Patent Abstract:
METHOD TO OPERATE A SYSTEM THAT HAS A MONITOR, A CAMERA AND A PROCESSOR. Systems, devices and methods that enable appearance comparison. The system includes at least one interactive imaging and display station. The station includes a mirror display device capable of operating selectably in a mirror mode, a display mode, or both; an imaging device for capturing one or more appearances appearing in a field of view in front of the mirror display device; and/or an image control unit for selecting the operating mode of the mirror display device according to a user command.
Publication number: BR112015014629A2
Application number: R112015014629-5
Filing date: 2013-12-18
Publication date: 2020-09-15
Inventors: Nissi Vilcovsky; Ofer Saban
Applicant: Eyesmatch Ltd
Primary IPC class:
Patent Description:

[001] This application claims the priority benefit of US Application No. 13/843,001, filed on March 15, 2013, and US Provisional Application No. 61/738,957, filed on December 18, 2012.
[002] The invention relates generally to imaging and display systems and, more particularly, to interactive monitors and screens, for example, in retail and/or service environments, clinical or home settings, videoconferencing, gaming, etc. Specific implementations relate to making a flat-panel display appear as a mirror. Another specific implementation relates to making a flat-panel display provide video of a person whose gaze meets the viewer's eyes, to create an eye-to-eye videoconference. RELATED ART
[003] Customers may purchase consumer articles, for example, apparel such as shirts, pants, jackets and other garments, as well as shoes, glasses and/or any other items or products, such as cosmetics, furniture and the like. Shopping typically takes place at a commercial facility, for example, a retail store. Before deciding which article to buy, a customer may try on various articles (for example, clothing, cosmetics) and/or pose with other background articles (for example, furniture), and may view each user appearance in front of a mirror, which may be located, for example, in a fitting area of the retail store. For example, the customer may try on a first article, for example, a suit, and view that user appearance in front of the mirror for that first trial. The customer may then try on a second article, for example, another suit. The customer may then need to memorize the user appearance of the first trial in order to make a mental comparison between the first article and the second article, so as to assess which of the two articles fits the customer best.
[004] Unfortunately, since the customer may try on multiple articles, and/or since the second trial may take place a considerable time after the first trial, or even in a different store, the customer may not be able to remember his or her appearance from each trial and may therefore need to repeatedly try on previously tried articles, for example, articles of clothing. This can result in frustration and an ineffective shopping experience.
[005] The conventional mirror (i.e., reflective surface) is the common and most reliable tool for an individual to explore his or her true self-appearance in real time. A few alternatives have been proposed in the prior art, combining a camera and a screen to replace the conventional mirror. However, these techniques are not convincing and have not yet been accepted as providing a reliable image of the individual, as if he were looking into a conventional mirror. This is mainly because the image generated by a camera is very different from an image generated by a mirror.
[006] When a user looks into a mirror, what he actually sees is a reflection of himself as if he were standing at a distance equal to twice his distance from the mirror. This is illustrated in Figure 5A, where a user standing at a distance D1 sees himself at a distance equal to twice D1. Similarly, as shown in Figure 5B, a user standing at a distance D2 will observe himself at a distance of 2×D2. In addition, the angle of the user's field of view (FOV) changes as the user changes distance, for example, when approaching the mirror. The FOV is limited by the specular reflection angle (β) from the mirror to the user's eye and by the margins of the visible image on all sides of the mirror (four sides for a rectangular or square mirror). In Figure 5B, the bottom of the vertical FOV is illustrated as twice the angle (β) formed by the lines connecting the user's eyes to the bottom of the mirror, which reflect the user's shoes. Consequently, as illustrated in Figure 5B, when the user approaches the mirror, the FOV angle increases (FOV1 < FOV2), which is why the user continues to see a reflection of the same size, so that the user in fact observes himself at about the same size, but closer. This is a notable difference from a camera, where the user appears larger in the image as he approaches the camera. This is mainly because the FOV of a camera is fixed and determined mainly by the camera lens, or focal length.
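By way of numeric illustration only (this sketch is not part of the claimed system), the mirror geometry described above can be expressed in a few lines of Python; the eye height, the distances, and the assumption that the mirror's bottom edge sits at half the eye height are illustrative values:

```python
import math

def mirror_apparent_distance(d_user):
    """The reflection appears at twice the user-to-mirror distance (2 x D1)."""
    return 2.0 * d_user

def mirror_lower_fov(d_user, eye_height):
    """Lower vertical FOV of a mirror whose bottom edge sits at half eye height:
    tan(beta) = (eye_height / 2) / d_user, and the ray reflected at angle beta
    reaches the shoes, so the lower FOV spans 2 * beta (compare Figure 5B)."""
    beta = math.atan((eye_height / 2.0) / d_user)
    return 2.0 * beta

for d in (3.0, 1.5):   # approaching the mirror: the FOV grows, apparent size does not
    print(d, mirror_apparent_distance(d),
          round(math.degrees(mirror_lower_fov(d, eye_height=1.7)), 1))
```

Running the loop shows the FOV angle roughly doubling as the distance halves, while the apparent distance (and hence apparent size) scales with 2×D, consistent with the behavior described for Figures 5A and 5B.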
[007] There are other phenomena to note regarding a mirror reflection. For example, when the user approaches the mirror, the reflection of his eyes always remains on the same virtual line into the mirror. Conversely, depending on the height of a camera, as the user approaches the camera, the user's eyes may appear at different levels. Another difference from a camera is that when someone looks in a mirror, his image appears reversed (for example, someone raising his right hand will appear to raise the left hand in the mirror). However, a mirror does not "swap" left and right any more than it swaps top and bottom. A mirror reverses the front/back axis (that is, what is in front of the mirror appears to be behind the mirror), and we define right and left relative to front and back. In addition, because the image in the mirror is virtual, the mirror may be smaller than the full body and the user will still observe the reflection of his entire body. The reason is that the specular reflection (in Figure 5A, the angle of incidence β equals the angle of reflection β) increases the effective field of view as the user approaches the mirror. Moreover, although the mirror is a two-dimensional object, the user observes his appearance in three dimensions.
[008] For at least some of the reasons noted above, no system has so far been provided that convincingly imitates a mirror. Imitating a mirror may have many applications in retail and other fields, and opens up the possibility of merging real-life experiences with virtual experiences, such as sharing on social networks and other mobile technologies. SUMMARY OF THE INVENTION
[009] Some demonstrative embodiments of the invention include devices, systems and/or methods that enable appearance comparison.
[010] According to some demonstrative embodiments of the invention, a system enabling appearance comparison may include at least one interactive imaging and display station. The station may include, for example, a mirror display device capable of operating in one or both of a mirror mode and a display mode; an imaging device for capturing one or more appearances from a field of view in front of the mirror display device; and/or an image control unit for selecting the operating mode of the mirror display device according to a user command. The mirror display device may take the form of a flat-panel screen, which in mirror mode shows the live, transposed video feed from the camera, while in display mode it shows a transposed video captured at an earlier time and retrieved from memory.
[011] According to some demonstrative embodiments of the invention, the image control unit may include an input device to receive the user command.
[012] According to some demonstrative embodiments of the invention, the image control unit may include a storage device for storing data of one or more images that may correspond to one or more appearances.
[013] According to some demonstrative embodiments of the invention, the mirror display device may be capable of being divided into at least first and second simultaneously displayable frames. The first frame may be selectably operable, for example, in either a mirror mode or a display mode. The second frame may be operable, for example, in a mirror mode.
[014] According to some demonstrative embodiments of the invention, the imaging device may be capable of capturing three-dimensional images of appearances.
[015] According to some demonstrative embodiments of the invention, the mirror display device may be capable of displaying images of appearances in predefined sequences.
[016] According to some demonstrative embodiments of the invention, the image control unit may be capable of selectively enabling a user's access to images of appearances authorized to that user, for example, based on user identification data received from the user.
[017] According to some demonstrative embodiments of the invention, the system may include two or more interactive imaging and display stations capable of communicating over a network. For example, the two or more stations may be able to communicate between them data representing images of appearances.
[018] According to some demonstrative embodiments of the invention, the image control unit can control the mirror display device to display, for example, during display mode, one or more images corresponding to appearances. The one or more images may include, for example, one or more mirrored appearances. Mirrored appearances are obtained by transposing the images or video feed obtained from a camera so as to generate images and video that, when presented on a monitor, resemble an appearance in a mirror.
[019] According to some demonstrative embodiments of the invention, a method enabling appearance comparison may comprise selecting the mirror mode of operation of a mirror display device capable of being selectably operated in either a mirror mode or a display mode; capturing an image corresponding to the appearance of a first trial in front of the mirror display device; storing the image of the first trial; selecting the display mode of operation of the mirror display device; and/or retrieving the image of the first trial and displaying the image on the mirror display device.
[020] According to additional embodiments, methods and devices are provided that use a camera and a flat-panel display to create a convincing mirror appearance. BRIEF DESCRIPTION OF THE DRAWINGS
[021] The subject matter regarded as the invention is particularly pointed out and distinctly claimed at the conclusion of the specification. The invention, however, both as to organization and method of operation, together with its features and advantages, may best be understood by reference to the following detailed description when read with the accompanying drawings, in which:
[022] It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn accurately or to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity, or several physical components may be included in one element. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or similar elements. It will be appreciated that these figures present examples of embodiments of the present invention and are not intended to limit the scope of the invention. DETAILED DESCRIPTION
[023] In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to a person skilled in the art that the present invention may be practiced without the specific details presented herein. In addition, some features of the invention that rely on principles and implementations known in the art may be omitted or simplified to avoid obscuring the present invention.
[024] Some demonstrative embodiments of the invention may include an interactive system that enables a user to compare one or more appearances, for example, to compare between different appearances, for example, as described in detail below.
[025] The term "user appearance", as used herein, may relate to a customer's appearance while trying on an article. The article may include, for example, apparel, such as clothing, shoes, glasses, garments, ties, and the like; an article, for example, furniture, located in the vicinity of the customer; as well as other items, articles, designs or products, such as cosmetics, hairstyles, haircuts, etc. Similarly, the embodiments may be used professionally, for example, by fashion designers who want to review versions of designs, or by people at home creating a life-size album of family members over their lives, etc.
[026] According to some demonstrative embodiments of the invention, the system may include an imaging device capable of capturing user appearances and a mirror display device capable of operating selectably as a mirror or as a monitor. When in mirror mode, the mirror display device may enable the user to evaluate and/or view the user appearance of a current trial of a consumer article. As will be described below, this may be accomplished using a live video feed to a flat-panel display, where the video feed is modified before being shown on the screen so as to imitate a mirror. When in display mode, the mirror display device may enable the user to evaluate and/or view one or more user appearances, for example, as previously captured by the imaging device in a previous trial, for example, as described in detail below.
[027] Reference is made to FIG. 1, which schematically illustrates an interactive system 100 in accordance with some demonstrative embodiments of the invention.
[028] According to some demonstrative embodiments of the invention, system 100 may include an interactive imaging and display station 110, which may include an image control unit 120, an imaging device 130 (for example, a still or video camera) and a mirror display device 140 (for example, a flat-panel display). The image control unit 120 may include, for example, a controller 121, a network interface 122, a storage device 123 and an input device 124, as described below.
[029] Aspects of the invention are described herein in the context of demonstrative embodiments in which an imaging device, for example, imaging device 130, a mirror display device, for example, mirror display device 140, and/or an image control unit, for example, image control unit 120, are separate units of an appearance comparison system, for example, system 100. However, it will be appreciated by those skilled in the art that the invention is not limited in this respect and that, according to other embodiments of the invention, the system may include any suitable configuration, combination and/or arrangement of the imaging device, the mirror display device and/or the image control unit. For example, the system may include an integrated module that includes the mirror display device, the imaging device and/or the image control unit. For example, the imaging device and/or the image control unit can be implemented as an integral part of the mirror display device.
[030] According to some demonstrative embodiments of the invention, the mirror display device 140 may be configured and/or may include components and mechanisms that allow the mirror display device 140 to operate selectably in two modes of operation. In a first mode of operation (the "mirror mode"), the mirror display device 140 can operate as a mirror. In a second mode of operation (the "display mode"), the mirror display device 140 can operate as a monitor. When mirror display device 140 operates in mirror mode, user 111 of system 100 can evaluate and/or view, in real time, the user appearance of a first trial of a consumer article as reflected by mirror display device 140. Imaging device 130 can capture an image of the user appearance of the first trial. The captured image can be stored, for example, by storage device 123, for example, as described below. User 111 can then pose in front of mirror display device 140 while trying on a second article, and imaging device 130 can capture a second image of the user appearance of the second trial. The user may be able to view the second article in real time by setting the mirror display device 140 to mirror mode. When mirror display device 140 is switched to display mode, mirror display device 140 can be controlled to display one or more of the previously captured images. Because of the ability of the mirror display device 140 to operate selectably in mirror or display mode, user 111 may be able to compare, simultaneously or sequentially, between the user appearances of the first and second trials, as described in detail below.
[031] In some demonstrative embodiments of the invention, for example, as shown in FIGS. 1, 2A, 2B, 3A and/or 3B, controller 121 can control device 140 to display, during the display mode of operation, a mirror image of appearances. However, it will be appreciated by those skilled in the art that the invention is not limited in this respect, and that in other embodiments the controller can control device 140 to display, during the display mode of operation, any other image corresponding to the appearance, for example, a rotated appearance, a reversed appearance, a substantially unchanged, for example, frontal, appearance, and the like, for example, as described below.
[032] Device 140 may include any suitable configuration and/or mechanism to enable selectable operation of mirror display device 140 in the first and second operating modes. For example, in one embodiment, device 140 may include an array of liquid crystal (LC) elements, which may alter their optical attributes, such as reflectivity, refractive index and the like, depending, for example, on a voltage applied to the liquid crystals. For example, applying a first voltage may change the optical attributes of the liquid crystals in such a way that the mirror display device 140 operates as a mirror; applying a second voltage may change the optical attributes of the liquid crystals in such a way that the mirror display device 140 operates as a liquid crystal display.
[033] In another embodiment of the invention, for example, the mirror display device 140 may include a liquid crystal display (LCD) device incorporated in a semi-reflective or one-way mirror. Consequently, when the LCD is switched to an idle mode of operation, mirror display device 140 may passively reflect sufficient incident light to enable the user to view the user appearance in reasonable quality and brightness. In contrast, when the LCD is switched to an active operating mode, the images displayed by the LCD device can be viewed by user 111 because they can be significantly brighter than the residual reflections from the surface of the mirror display. In other embodiments, the system is implemented on a stand-alone computer, without the display monitor itself, so that the purchaser of the system can supply his own monitor, or in configurations where the system is used without screens but is remotely controlled from the customer's mobile devices or tablets, so that users can watch their appearances on their mobile devices and remotely control the recording.
[034] Also, in some cases the camera may be located far from the monitor; for example, luxury brands may embed the screen into a wall and place the camera, with appropriate lenses, at a remote location.
[035] According to some demonstrative embodiments of the invention, the mirror display device 140 can be implemented with an HD LCD mirror TV, such as, for example, Model No. 32PM8822/10 available from Royal Philips Electronics, for example, as described at the Internet site <http://www.research.philips.com/newscenter/archive/2003/mirrortv.html>. Such a device may include, for example, a polymer-based organic light-emitting display (OLED). Mirror display device 140 may include any other suitable device implementing any suitable display technology. For example, device 140 may include a nano-emissive display (NED); a plasma display panel (PDP); a cathode ray tube (CRT) screen; a Digital Light Processing (DLP) screen; a surface-conduction electron-emitter display (SED); a tablet screen; a flat SED screen; an organic electronic screen; electronic paper; a three-dimensional screen, for example, a hologram screen; a thin-film transistor (TFT) screen; an optical TFT; a dot-matrix LED screen; an LCD screen having CCD capabilities, for example, such that the mirror display device 140 may be able to perform the functionality of imaging device 130; an LCD screen for painting; a high definition television (HDTV) screen; a rear-projection display device, and the like.
[036] According to some demonstrative embodiments of the invention, the imaging device 130 can be adapted to capture one or more appearances from a field of view (FOV) in front of the mirror display device 140. The FOV in front of the mirror display device 140 may include, for example, a field, area, scene, zone and/or region in front of mirror display device 140. For example, the FOV may include at least part of a field, area, scene, zone and/or region reflected by mirror display device 140 when in mirror mode.
[037] Although the scope of the present invention is not limited in this respect, imaging device 130 may be or may include, for example, a CCD camera, a video camera, a camera and/or camera configuration enabling capture of 3D images, for example, a stereoscopic camera, and the like. A stereoscopic camera can be adapted, for example, to capture a 3D image of the user appearance. The stereoscopic camera may include, for example, two lenses spaced apart by a distance that may correspond to the distance between two human eyes. Consequently, the stereoscopic camera may be able to simulate human binocular vision, also known as stereophotography, and thus capture a 3D image.
[038] According to some demonstrative embodiments of the invention, station 110 can be a stand-alone unit, which can be located in an appearance comparison area at a desired location, for example, an office, a residence or a retail store, for example, a clothing store.
[039] According to other demonstrative embodiments of the invention, station 110 can be connected, for example, via network interface 122, to a network, for example, network 150, thus enabling communication between station 110 and one or more other stations affiliated with network 150, such as station 160 and/or station 170.
[040] According to some demonstrative embodiments of the invention, station 110 may include network interface 122, which can be adapted to interact with network 150 to send and receive information from other stations on network 150, as described herein. Such information may include, but is not limited to, data corresponding to images of users captured at various stations of system 100, for example, stations 160 and/or 170, as well as user identification information to enable secure access to the system, as described in more detail below. Network 150 may include, for example, a local area network (LAN), a wide area network (WAN), a global communication network, for example, the Internet, a wireless communication network, such as a wireless LAN (WLAN), a Bluetooth network, a wireless virtual private network (VPN), a cellular communication network, for example, a 3rd Generation Partnership Project (3GPP) network, such as, for example, a Frequency Division Duplex (FDD) network, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) cellular communication network, and the like.
[041] According to some demonstrative embodiments of the invention, one or more of stations 160 and 170 can be portable devices. Non-limiting examples of such portable devices may include a mobile phone, a laptop computer, a notebook computer, a mobile computer, a tablet computer, a Personal Communication Systems (PCS) device, a Personal Digital Assistant (PDA), a wireless communication device, a PDA device incorporating a wireless communication device, a cellular phone, a cordless phone, a smartcard, a token, a memory card, a memory unit, and the like. In some embodiments of the invention, one or more of stations 160 and 170 may be non-portable devices, such as, for example, a desktop computer, a television set, a server computer, and the like.
[042] According to some embodiments of the invention, system 100 can also include a control center 190, which can be connected to stations 110, 160 and/or 170, for example, via network 150. Control center 190 can receive and store data, which can represent, for example, user appearance data and/or images received from one or more of stations 110, 160 and/or 170.
[043] According to some embodiments of the invention, stations 110, 160 and/or 170 may be located at different places, such as, for example, different stores of a store chain. In addition, stations 110, 160 and/or 170 can be located at different places within a building, for example, different floors, different sections of the same floor, and the like. Such locations may include, for example, clothing stores, shoe stores, retail outlets, concept showrooms, exhibitions, shopping malls, opticians, cosmetics stores, sports clubs, health institutes, fitness centers, airports, train stations, cafeterias, restaurants, hotels, residences, and the like. One or more of stations 110, 160 and 170 can also be used for interactive outdoor signage. For example, imaging device 130 can capture images that can be displayed on a billboard (not shown). System 100 can enable user 111 to choose an image to be displayed on the billboard from a plurality of images of, for example, several previous clothing trials.
[044] According to some demonstrative embodiments of the invention, images of user appearances can be viewed at different locations. For example, imaging device 130 can capture an image of the first trial. The image can then be sent from network interface 122 via network 150 using signals 151 and 152, for example, to station 160. Consequently, user 111 may be able to view the image of the user's first appearance at station 160. Therefore, user 111 may be able to view, for example, the user appearance of the first trial in a first store of a clothing store chain, for example, a store associated with station 110, and compare the user appearance of the first trial with the user appearance of a second trial, which can occur in a second store of the same chain or an affiliated chain, for example, a store associated with station 160, and/or at a different time, for example, one or more hours, days or weeks after the first trial.
[045] According to another demonstrative embodiment of the invention, imaging device 130 can capture an image of a user appearance from a first trial and send the image, via network interface 122 over network 150, to control center 190, where the image can be stored for later retrieval. Consequently, user 111 can gain access to the images of the first trial when accessing any station connected through network 150 to control center 190, for example, station 160.
[046] According to some demonstrative embodiments of the invention, storage device 123 may include, for example, a hard disk drive, a floppy disk drive, a Compact Disc (CD) drive, a CD-ROM drive, a Digital Versatile Disc (DVD) drive, or other suitable removable or non-removable storage units.
[047] According to some demonstrative embodiments of the invention, controller 121 may be or may include, for example, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a microprocessor, a controller, a chip, a microchip, an Integrated Circuit (IC), or any other suitable special-purpose or general-purpose processor or controller, for example, as are known in the art.
[048] Input device 124 may include, for example, a keyboard; a remote control; a motion sensor; a pointing device, such as a laser pointer; a mouse; a touchpad; a touch screen, which can be incorporated, for example, in the mirror display device 140, or can be implemented by any other suitable unit, for example, separate from device 140; a biometric input device, for example, a fingerprint scanner and/or a face-scanning camera; and/or any other suitable pointing or input device. Input device 124 may be adapted to receive user identification data, for example, to enable secure access of user 111 to system 100, as described in detail below.
[049] According to some demonstrative embodiments of the invention, user 111 can provide user commands to input device 124 to operate imaging device 130. Input device 124 may include, for example, an interface to enable user 111 of system 100 to define operating parameters of imaging device 130. Controller 121 can receive the inputs of user 111 via signals 131 and control the operation of imaging device 130 accordingly. User commands can include, for example, commands relating to the timing of image capture, the positioning of imaging device 130, for example, according to an automatic tracking algorithm that can follow, for example, the position of user 111, and/or image attributes such as focus, camera position, capture angle, dynamic range and the like. User commands may also include commands for defining image capture operating modes of imaging device 130, such as, for example, a video capture mode, a photographic mode, and the like. According to some embodiments of the invention, imaging device 130 may include a sound input device, for example, a microphone, and/or a sound output device, for example, a speaker. Likewise, the imaging device can receive audio signals, for example, voice signals generated by user 111, which can be recorded and stored, for example, on storage device 123, and reproduce audio signals through the sound output device. The sound output device may also be able to reproduce any other type of audio signals, such as radio programs, compact disc recordings, and the like.
[050] According to some demonstrative embodiments of the invention, controller 121 can, for example, set the operating mode of mirror display device 140 according to the command received from user 111. For example, if mirror display device 140 is operating in mirror mode, the user of system 100 can provide the input device with a switch command, for example, by pressing a designated button on input device 124, to switch mirror display device 140 to display mode. Controller 121 can receive the input from input device 124 and control device 140 to change to the display mode of operation, for example, using signals 141.
[051] According to some demonstrative embodiments of the invention, imaging device 130 can be mounted at various positions, such as, for example, above, below or at the side of mirror display device 140, thus capturing an image of a user appearance, which can be an image of a particular clothing trial, an image of user 111 with different articles, for example, furniture, and/or posing with different clothes, and the like. In some embodiments of the invention, imaging device 130 can capture the user appearance as it appears on mirror display device 140, i.e., a mirror image of the user appearance. In other embodiments, imaging device 130 can capture the appearance, and controller 121 can generate a mirror image corresponding to the appearance captured by imaging device 130. For example, storage 123 can store instructions that, when executed by the controller, may result in any suitable method or algorithm for rotating, reversing and/or mirroring the appearance captured by imaging device 130, thereby generating image data that represents the rotated, inverted and/or mirrored image of the image captured by device 130. According to these embodiments, controller 121 can control mirror display device 140 to display, during the display mode of operation, the rotated, inverted and/or mirrored image. In other embodiments, controller 121 may control mirror display device 140 to display, during the display mode of operation, an image corresponding to the image captured by device 130, for example, a non-mirrored, non-rotated and/or non-inverted image. In some embodiments, imaging device 130 may not be visible to user 111, may be located behind display device 140 and/or may be incorporated into mirror display device 140, which may be or may include, for example, an LCD-CCD device capable of both displaying and capturing images. For example, in a demonstrative embodiment of the invention, device 140 may include an arrangement, screen or surface, for example, including liquid crystals, to realize the mirror display functionality, for example, as described above, as well as the imaging functionality of imaging device 130; for example, device 140 may be a combined mirror display and imaging device.
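As an illustrative sketch of the mirroring operation just described (the patent does not prescribe a specific library; OpenCV and camera index 0 are assumptions here), the left/right flip of a live camera frame could look like this:

```python
import cv2

def to_mirror_image(frame):
    """Flip about the vertical axis (left/right swap) so that a raised right
    hand appears on the viewer's left, as in a conventional mirror."""
    return cv2.flip(frame, 1)   # flipCode = 1 -> horizontal (left-right) flip

cap = cv2.VideoCapture(0)       # live feed from the station camera (assumed index)
ok, frame = cap.read()
if ok:
    cv2.imshow("mirror mode", to_mirror_image(frame))
    cv2.waitKey(0)
cap.release()
```

Rotation or inversion, where an embodiment calls for them, would be analogous single-call operations (for example, `cv2.flip(frame, 0)` for a vertical flip).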
[052] In some demonstrative embodiments of the invention, one or more of stations 110, 160 and/or 170 may not include the image capture device 130, and/or one or more of stations 110, 160 and/or 170 may not include the mirror display device 140. For example, a first station of system 100 may include only imaging device 130 and may not include, for example, mirror display device 140. User 111 can use the first station to capture the image of the first trial of the user appearance, for example, without being able to view the resulting image of the first trial at the first station. User 111 can later view the captured image of the first trial at another station of system 100 that includes mirror display device 140.
[053] According to some demonstrative embodiments of the invention, imaging device 130 can be positioned so as to capture an image and/or a sequence of images, videos or the like of a scene occurring in front of mirror display device 140. Additionally or alternatively, imaging device 130 can be positioned so as to capture an image reflected from mirror display device 140. For example, imaging device 130 may be able to capture an image of user 111 posing in front of mirror display device 140. While posing in front of mirror display device 140, user 111 can check his appearance, for example, in a first fitting trial. According to an input provided by user 111 at input device 124, imaging device 130 can capture the image of the user appearance, which can be, for example, a particular trial of clothing, for example, a garment, and the like. It will be appreciated that trials by user 111 can also include user 111 posing with various items, which can be located in the vicinity of user 111, such as furniture, a studio installation, and the like. Consequently, imaging device 130 can capture images of the user appearances of, for example, a first trial, a second trial, etc., and can send the respective captured images to storage device 123 via signals 131 and signals 30. User 111 may be able to retrieve the captured image of, for example, the first trial at a later time, for example, after the second or subsequent trials, and can compare between the first and the second or other trials, for example, as described below with reference to FIGS. 2A, 2B and 2C.
[054] According to some demonstrative embodiments of the invention, storage device 123 can be adapted to receive data representing the images captured by imaging device 130 and to store the images of appearances and, more specifically, of user appearances, for example, of particular clothing trials, captured by imaging device 130. Images of particular user appearance trials can be retrieved from storage device 123, for example, by controller 121, and displayed on display 140. User 111 can compare between the displayed images, for example, as described in detail below.
[055] According to some demonstrative embodiments of the invention, storage device 123 may include data representing, for example, software algorithms that require and/or verify user identification data, such as a user ID, a password, an authentication time, biometric data, and the like, to enable secure access to station 110, as described in detail below. For example, controller 121 can control mirror display device 140 to display images corresponding to the identity of user 111, for example, based on identity data provided by user 111. For example, user 111 can provide input device 124 with a user identification entry, which may include, for example, a biometric entry, such as face recognition, a handprint, a fingerprint, an eye print, voice recognition, and the like. The user identification entry can include any other suitable entry, for example, a credit card, a personal identification number (PIN), a password, a smartcard, a customer card, a club card, or the like. Controller 121 verifies, for example, based on any suitable method and/or algorithm, that the user identification entry provided at input device 124 matches the user identification data that may be stored, for example, in storage 123 or control center 190. Software capable of verifying a biometric entry can be, for example, "FaceVision Active ID Technology" provided by Geometric Inc. If controller 121 matches the entry of user 111 with the stored user identification data, controller 121 may enable user 111 to access data representing, for example, images of previous user appearances of user 111.
[056] According to some demonstrative embodiments of the invention, storage device 123 may include data representing, for example, software algorithms enabling additional features of the system, such as, for example,
[057] According to some demonstrative embodiments of the invention, controller 121 can provide, for example, image and/or video search capabilities and image and/or video playback functions, which capabilities and functions can be predefined by system 100 or can be defined, for example, during operation, according to one or more user commands received from user 111 via input device 124. For example, controller 121 may be able to retrieve one or more images of user appearances and display the images on mirror display device 140 in various sequences. For example, images of previous trials may be displayed in a substantially continuous forward, reverse or mixed sequence, in a randomly accessed sequence, and/or in a step-by-step sequence, or in any other sequence. In addition, images of previous trials can be displayed simultaneously on mirror display device 140, for example, as described below. Controller 121 may also be able to delete previously captured user appearances, limit the amount of data that can be saved on storage device 123, and the like, and can also control a shape, size, color, etc., of the image displayed on mirror display device 140.
[058] According to some demonstrative embodiments of the invention, user 111 can use a portable storage device 180 capable of storing one or more of the captured images. The portable storage device can include any suitable portable storage device, for example, a smartcard, a disk-on-key device, and the like. User 111 can download, for example, images, represented by, for example, signals 50, of a first user appearance trial from storage device 123, for example, via a storage interface 125 or via any other suitable data connection. User 111 can then later upload the image, for example, of the first trial, at another location, for example, the residence of user 111, or at another station of system 100, for example, station 170.
[059] In some embodiments of the invention, station 110 may include more than one mirror display device, or it may include a mirror display device 140 that can be divided into two simultaneous frames, as described hereinafter with reference to FIGS. 2A and 2B.
[060] According to some demonstrative embodiments of the invention, controller 121 can record or store, for example, in storage 123, parameters characterizing user 111. For example, system 100 may include a scale connected, for example, to storage device 123 via controller 121. Controller 121 may be able to record, for example, the weight of user 111 during, for example, an article trial. Consequently, user 111 can later retrieve that parameter, which can be, for example, the weight of user 111.
[061] Reference is now made to FIGS. 2A and 2B, which schematically illustrate stages of comparison between appearances using an interactive system in accordance with some demonstrative embodiments of the invention.
[062] According to some demonstrative embodiments of the invention, mirror display device 140 can be divided into two frames, in which one frame can operate as a mirror frame 192 and another frame 191 can operate selectably as a mirror and as a display frame. As shown in FIG. 2A, user 111 can pose in front of mirror frame 192 in a first trial, which can be captured by imaging device 130 and stored in storage device 123. Thereafter, as shown in FIG. 2B, user 111 can simultaneously view in frame 191 the image of the user appearance of the first trial and/or any other user appearances, for example, user appearances stored on storage device 123 and/or received over network 150 (FIG. 1), side by side with the normal mirror appearance of a second trial in frame 192, and compare between the first and the second trial.
[063] Reference is made to FIGS. 3A, 3B and 3C, which schematically illustrate three sequential stages of comparison between appearances using an interactive system in accordance with some demonstrative embodiments of the invention.
[064] As shown in FIG. 3A, the user of system 100 can view a first trial of a user appearance on mirror display device 140 operating in its mirror mode. Controller 121 may receive, for example, from input device 124, a user input, which may include a request to use imaging device 130 to capture the first trial of the user appearance. As a result, imaging device 130 can capture an image of the user's first appearance trial, and storage device 123 can store the captured image.
[065] As shown in FIG. 3B, user 111 can view a second trial of the user appearance on mirror display device 140, which can operate in the mirror mode of operation. Then, when user 111 wants to view a previous appearance, for example, for comparison, controller 121 can receive a user input via input device 124 requesting to view the first trial. At this point, as shown in FIG. 3C, controller 121 can change the operating mode of mirror display device 140 to display mode using signals 141. Controller 121 can also control device 140 to display the first trial. Therefore, by switching between the operating modes of mirror display device 140, user 111 can compare the user appearance of the second trial with the user appearance of the first trial and/or any other user appearances previously stored on storage device 123.
[066] Reference is now made to FIG. 4, which schematically illustrates a flowchart of a method enabling comparison of one or more appearances in accordance with some demonstrative embodiments of the invention. Although the invention is not limited in this respect, one or more operations of the method of FIG. 4 may be performed by one or more elements of system 100 (FIG. 1).
[067] As indicated at block 410, the method may include, for example, setting the operating mode of a mirror display device. For example, user 111 (FIG. 1) can initially set the operating mode of mirror display device 140 (FIG. 1) to mirror mode. Alternatively, display 140 (FIG. 1) can be designed to operate in mirror mode by default whenever a new user registers with system 100 (FIG. 1).
[068] As indicated at block 420, the method may also include, for example, posing in front of the mirror display device. For example, user 111 (FIG. 1) can pose in front of mirror display device 140 (FIG. 1) and check the user appearance of a first trial of, for example, clothing, shoes and/or any other garment.
[069] As indicated at block 430, the method may also include, for example, capturing an image of the user appearance of the first trial. For example, user 111 (FIG. 1) can provide a user command to image control unit 120 (FIG. 1), commanding imaging device 130 (FIG. 1) to capture an image of the user appearance of the first trial.
[070] As indicated at block 440, the method may also include, for example, posing in front of the mirror display device with a different user appearance. For example, user 111 (FIG. 1) can change one or more items of the environment, such as, for example, furniture and/or clothing, pose again in front of mirror display device 140 operating in mirror mode, and view a second user appearance.
[071] As indicated at block 450, the method may also include, for example, switching between the operating modes of the mirror display device. For example, user 111 (FIG. 1) can switch mirror display device 140 (FIG. 1) between the mirror mode and the display mode. Likewise, user 111 (FIG. 1) may be able to compare between the user appearance of the first trial, and/or any other user appearances, for example, user appearances stored on storage device 123 and/or received over network 150 (FIG. 1), and the user appearance of the second trial.
[072] According to some demonstrative embodiments of the invention, user 111 (FIG. 1) can indicate, and/or station 110 (FIG. 1) may be able to store, for example, automatically, parameters for each trial, for example, including purchase parameters such as store name, store address, price, time and/or date of the clothing trial, salesperson name, and the like. User 111 (FIG. 1) can, for example, store captured images of the user appearances on a removable or portable storage device, for example, as described above with reference to FIG. 1, and may later review the user appearance images while attributing each image to, for example, a particular store, and the like. In addition, user 111 can define, and/or controller 121 (FIG. 1) may be able to generate and/or store, reminders on storage device 123, for example, alerts regarding, for example, discounts, end-of-season sales, and the like.
[073] A discussion is now presented regarding embodiments that use an image captured by a camera, for example, a still or digital video camera, and manipulate the image in such a way that, when presented on a screen, it resembles a mirror image, that is, the image the user would have seen if the screen were actually a standard mirror.
[074] Figure 6 is a schematic illustration for an understanding of the following embodiments, which includes a digital screen 640, a camera 630 and a computerized image processor 645, which together could create a convincing mirror experience. Ideally, the camera would be able to move in a parallel plane behind the screen, tracking the location of the user's eyes, to create a convincing eye-to-eye experience. However, it is not practical to place a camera behind a regular screen, simply because it would block the camera's view. In theory, this problem could be overcome using a semi-transparent screen or several pinholes with several cameras; however, this can be very expensive and complex to implement. A simpler solution is to place the camera above the display screen and manipulate the captured image to imitate a mirror. Examples of the appropriate manipulations are provided below.
[075] An adaptive FOV is needed in order to compensate for changes in the distance between the user and the screen, so that the user sees his image at the same size, as in a mirror. According to one embodiment, this is solved using the camera's zoom (digital and/or mechanical). Traditional cameras have a fixed FOV or a mechanically tunable FOV. In order to create an adaptive, that is, continuously variable, FOV, the system needs to manipulate the resolution, or control the camera's zoom and focus in real time, based on user tracking.
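A minimal sketch of one way such an adaptive FOV could be driven digitally (crop-and-resize) by a tracked user distance follows; the reference distance, the centered crop, and all names are illustrative assumptions, not the patent's implementation:

```python
import cv2

REF_DISTANCE = 2.0   # meters; the full camera frame is used at this distance (assumed)

def adaptive_digital_zoom(frame, user_distance):
    """Crop-and-resize 'digital zoom' driven by the tracked user distance, so
    the displayed body size stays roughly constant, as it would in a mirror."""
    h, w = frame.shape[:2]
    zoom = max(1.0, user_distance / REF_DISTANCE)   # zoom in as the user recedes
    crop_w, crop_h = int(w / zoom), int(h / zoom)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2   # centered crop; a tracker
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]    # could re-center on the user
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```

A mechanical-zoom embodiment would replace the crop with a lens command, trading latency for the resolution loss inherent in cropping.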
[076] In addition, the image needs to be flipped about its vertical axis to reproduce the right-to-left behavior of a mirror. This image transformation can be performed relatively easily by manipulating the pixel addresses of a digital image.
[077] As explained in relation to Figure 5A, the mirror can be smaller than the user and still show a full-body reflection. This can be achieved by a combination of proper FOV selection and proper screen size selection. The idea is to project on the screen an image having a FOV that provides the proportions the user would see at twice his distance from the mirror, such that the entire body is visible even if the digital screen is shorter than the user's height. This is exemplified by images A, B and C in Figure 6, which show an image of the user captured at different distances from the mirror but showing the user's body at the same size by, among other things, manipulating the FOV. In some embodiments, the image resolution also changes according to the user's distance from the mirror. For example, for short distances, an embodiment may use an array of cameras, and the image manipulator would use stitching to reduce image distortion.
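The full-body-in-a-short-mirror property invoked here can be checked with a short calculation: a body point at height y is seen on the glass at height (y + eye_height) / 2, so the full-body image on the mirror plane always spans half the body height, at any distance. A sketch (heights in meters are assumed values):

```python
def image_extent_on_mirror(body_top, body_bottom, eye_height):
    """Height of the full-body reflection measured on the mirror plane.
    A body point at height y maps to the glass at (y + eye_height) / 2,
    so the image spans half the body height, independent of distance."""
    top_on_glass = (body_top + eye_height) / 2.0
    bottom_on_glass = (body_bottom + eye_height) / 2.0
    return top_on_glass - bottom_on_glass

print(image_extent_on_mirror(1.75, 0.0, 1.65))   # ~0.875 m for a 1.75 m tall user
```

This is the target the displayed FOV must reproduce: the rendered body should occupy the on-screen extent a mirror of that size would show, not the extent a fixed camera happens to capture.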
[078] In order to support three-dimensional imaging, the configuration requires two cameras spaced at a distance corresponding to the distance between the user's eyes, or one camera with two effective virtual viewpoints. A convincing 3D experience will also require implementing an adaptive closed-loop method that can alter the rendering of the 3D image as a function of distance. When the user looks at his reflection in the mirror, he sees himself in 3D; as he approaches or moves away from the mirror, the angle from his eyes to the reflection changes, which also changes the depth of the 3D image.
[079] As explained with reference to Figure 6, it can be very complex or expensive to place the camera behind the monitor at eye level. Therefore, in the following embodiments a practical method is provided to implement the ideal system described above with a fixed camera or cameras mounted on the perimeter of the screen and facing the user. The main challenge is how to compensate for image distortions adaptively and in real time, in order to create a user experience similar to the one that would be obtained with the ideal configuration. Figures 7 and 8 show some examples of the challenges of the practical configuration with the camera placed above the digital screen, that is, not corresponding to the user's eye level.
[080] Figure 7 illustrates what happens when the user approaches or moves away from the mirror when using a camera mounted above the screen and pointed horizontally. Assuming the system is calibrated to the center of the dynamic range, when the user is far away his image is smaller (scenario C in Figure 7), and when the user is closer to the screen (scenario A), the user's image is larger and the camera's FOV cuts off part of the user's image. In addition, when the user approaches the screen, the projection distortion becomes noticeable, which means the user will not feel as if he is looking at himself in a mirror.
[081] To make it possible to capture the user's entire body at any distance, in one embodiment the camera is located at the top of the screen and tilted downward to allow the maximum dynamic range of user movement in front of the screen. As shown in Figure 8, because the camera is tilted downward, the projection distortion is much greater and more noticeable. The closer the user is to the camera, the more distorted his image becomes. The nature of the distortion is mainly projection distortion - the user looks shorter and the upper body looks bigger. In this configuration, the user's image also gets smaller as he moves further from the screen. On the other hand, the camera's effective/usable FOV covers a larger area in front of the screen, allowing the user to move further away from the screen and still be shown in a full-body image.
[082] To enable the generation of a convincing (augmented) mirror experience from a camera located in front of the user and spatially offset from the screen, a computerized method is used to manipulate the captured image before its presentation on the screen. The computerized method can operate in real time or not, depending on the operating mode of the screen.
[083] The computerized method receives the image captured by the camera as input and performs an image transformation to correct the camera's point of view and field of view so as to match the point of view that would occur with a conventional mirror. That is, as the user moves closer to or farther from the mirror, the point of view that would have been reflected in the mirror differs from that captured by the camera. Embodiments of the inventive computerized method incorporate adaptive POV (point of view) and adaptive FOV modules based on tracking the user's position.
[084] Specifically, attempting to solve the virtual mirror problem only by flipping the image about its vertical axis, as proposed in the prior art, is insufficient to create a mirror experience. When using a camera, as the user approaches the camera/screen the user's image becomes larger, and vice versa, as opposed to a conventional mirror, in which the user always sees himself at about the same size, with a slightly different FOV, regardless of his distance from the mirror. In addition, the camera approach introduces image distortion that needs to be corrected by a much more advanced computerized method. Performing adaptive POV and FOV correction before displaying the image on the screen results in an image that imitates the reflection of a mirror positioned at the screen location. In addition, for an even more convincing mirror experience, the following features can also be incorporated.
[085] According to one embodiment, the computerized method also performs dynamic adaptive stitching of images obtained from a plurality of cameras. According to this embodiment, the system incorporates several cameras, for example, positioned at different locations and/or based on different technologies/characteristics, in order to improve image resolution and accuracy, extend the FOV of a single camera, reduce distortion for different users and different user orientations, and create a better model of the user's body mesh to compensate for camera distortion. For example, because the image loses resolution as the user moves away from the screen (fewer pixels capture the user's image), it makes sense to increase the optical zoom and re-aim the camera. The problem is that this comes at the expense of reducing the camera's FOV. In order to improve resolution while maintaining the FOV, dynamic stitching or zoom can be implemented simultaneously.
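A hedged sketch of the multi-camera stitching step, using OpenCV's high-level Stitcher: the patent describes "dynamic adaptive image stitching" generically and does not mandate this API, so treat this as one possible off-the-shelf implementation; the input file names are placeholders:

```python
import cv2

def stitch_views(frames):
    """Combine overlapping frames from several cameras into one wider view."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("stitching failed with status %d" % status)
    return panorama

# frames = [cv2.imread(p) for p in ("cam_top.png", "cam_side.png")]  # assumed inputs
# wide_view = stitch_views(frames)
```

A production system aiming at real-time, per-frame stitching would likely precompute the camera-to-camera alignment once and reuse it, rather than re-estimating it on every frame as the generic Stitcher does.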
[086] As mentioned in the previous paragraph, to properly convey a mirror experience, the system can also implement adaptive optical zoom. Adaptive optical zoom increases image/video quality and resolution based on distance tracking, that is, continuous tracking of the user's distance. In addition, it reduces the asymmetric distortion that can occur if the camera is mounted at the side of the screen.
[087] In order to guarantee the accuracy of the image transposition, the platform can be calibrated at discrete reference points, and the computerized method can interpolate and extrapolate the correct transformation at different positions in front of the screen. According to one embodiment, the projection distortion can be calculated analytically based on user tracking and camera location. According to another embodiment, the image distortion is measured in front of the screen, and this measurement information is used instead of the direct projection calculation.
[088] [088] The computerized method is optimized to reduce delay as much as possible by using, for example, parallel computation and/or offline pre-calculation of the transformation per distance, such that when a distance to a user is measured, the mapping transformation is ready beforehand. According to another embodiment, a simpler, calibration-free approach is used, creating an analytical projection transformation from the calculated/measured point of view of the user.
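For illustration, the offline pre-computation can be thought of as a lookup table of transformation maps keyed by quantized user distance. The sketch below (in Python; all names and numeric ranges are assumptions, not part of the disclosure) snaps each measured distance to the nearest pre-computed entry, so the mapping is ready before the frame arrives:

```python
import numpy as np

class TransformCache:
    """Pre-computes one mapping per quantized distance, so that at run time
    the transformation is fetched, not recomputed."""
    def __init__(self, build_map, d_min=1.0, d_max=4.0, step=0.25):
        # build_map(distance) -> mapping object (e.g., a 3x3 homography)
        self.d_min, self.step = d_min, step
        self.distances = np.arange(d_min, d_max + step, step)
        self.maps = [build_map(d) for d in self.distances]  # done offline

    def lookup(self, distance):
        # Snap the measured distance to the nearest pre-computed entry.
        i = int(round((distance - self.d_min) / self.step))
        return self.maps[max(0, min(i, len(self.maps) - 1))]

# Hypothetical map builder: a scale matrix whose factor depends on distance.
cache = TransformCache(lambda d: np.diag([2.0 / d, 2.0 / d, 1.0]))
H = cache.lookup(2.3)  # ready before the next frame arrives
```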
[089] [089] The following is a high-level description of the modules that together provide the mirror image transformation. The video acquisition module captures the video, enhances it, and sets up optimization and control to obtain the best video quality under the constraints of the available hardware. The geometric measurement module measures or estimates any combination of the distance to the user, the user's height, eye and head position, performs 3D body mesh estimation, etc. The camera control module configures the camera for maximum image/video quality and resolution. In the case of several cameras, FOV optimization to obtain the maximum resolution is performed based on the user's distance from each camera.
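A minimal sketch of how these modules could be chained per frame is given below; the function bodies are trivial stand-ins (all hypothetical) meant only to show the data flow between acquisition, measurement, camera control and transformation:

```python
import numpy as np

# Trivial stand-ins for the four modules (all hypothetical).
def acquire(frame):                  # video acquisition: enhance the raw frame
    return frame.astype(np.float32) / 255.0

def measure_geometry(frame):         # geometric measurement: distance, height, eyes
    return {"distance_m": 2.0, "height_m": 1.75}

def camera_control(geom):            # camera control: zoom for maximum resolution
    return {"zoom": min(4.0, 4.0 / geom["distance_m"])}

def geometric_transform(frame, geom):  # placeholder for the mirror mapping
    return np.fliplr(frame)            # at minimum, flip left-right

frame = np.zeros((480, 640, 3), dtype=np.uint8)
geom = measure_geometry(acquire(frame))
settings = camera_control(geom)
out = geometric_transform(frame, geom)
```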
[090] [090] The geometric transformation module takes the video frame by frame, together with the relevant position/geometric orientation information of the user, maps the raw images to the correct location, and fills in blank pixels, if any. For example, the geometric transformation is performed to match the eyes, as if the user were looking in a mirror. In other words, the computerized method calculates the right geometric transformation to distort the input video in such a way that the user looks at himself and has the feeling that he is looking into his own eyes. This transformation is useful in other situations as well. For example, in videoconferencing using computers, since the camera is positioned above the computer monitor or screen, the resulting image always shows the user looking downwards, because the user is actually looking at the computer monitor and not at the camera. Using the disclosed geometric transformation in such a situation results in an image of the user as if he were looking at the camera, when in fact he is still looking at the screen. This results in a more personal conference environment. In the videoconferencing application, the geometric correction is similar to that of the mirror application, except that it is not required to flip the image from left to right.
[091] [091] The mapping transformation module can be implemented according to several embodiments. According to one embodiment, a scaling approach is used, in which geometric assumptions are used to apply a scaling correction by distance to the user, in order to match the size of the image the user would see in a mirror. According to another embodiment, an image projection and scaling approach is used, in which, based on the user's distance, the spatial offset, and the location of the eyes per distance and offset, the projection transformation between the user/user's eyes and the camera location is calculated or measured. An additional transformation can be performed to correct optical distortion.
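As one hedged illustration of the scaling approach: under a simple pinhole-camera assumption, the user's image size varies as 1/distance, so multiplying by distance/reference-distance approximately restores the near-constant size of a mirror reflection. The sketch below anchors the scaled image at the bottom of the frame so the feet stay at the floor line (this is a sketch under those assumptions, not the exact disclosed mapping):

```python
import cv2
import numpy as np

def mirror_scale(frame, distance_m, ref_distance_m=2.0):
    """Counteract the camera's 1/distance size change so the user keeps a
    near-constant on-screen size, as in a mirror (pinhole assumption)."""
    s = distance_m / ref_distance_m          # shrink when closer, grow when farther
    h, w = frame.shape[:2]
    scaled = cv2.resize(frame, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    out = np.zeros_like(frame)
    sh, sw = scaled.shape[:2]
    ys, xs = max(0, sh - h), max(0, (sw - w) // 2)   # crop offsets (closer user)
    crop = scaled[ys:ys + min(sh, h), xs:xs + min(sw, w)]
    ch, cw = crop.shape[:2]
    y0, x0 = h - ch, (w - cw) // 2                   # bottom-anchored, centered
    out[y0:y0 + ch, x0:x0 + cw] = crop
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
shown = mirror_scale(frame, distance_m=1.5)   # user stepped closer: image shrinks
```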
[092] [092] According to yet another embodiment, a registration approach is used, which has been shown to provide very accurate results. According to this approach, the transformation is pre-programmed based on an offline registration technique that maps the raw image from the offset/projected/tilted camera to an image that was obtained in front of the user while the user's eyes stare at the camera. The reference image can be generated from multiple cameras with markers. For example, a reference camera will be located at a height and distance that reduce most of the optical distortion, for example, at eye level and 2-3 meters from the user. The registration will produce the best transformation to match the eyes and the entire body based on different markers. In one example, several markers, for example, white dots on a 2D or 3D calibration target, for example, a human/mannequin target, will form a two-dimensional grid around the central body. For example, circular stickers can be placed on the legs, chest, shoulders, eyes, etc. To improve accuracy, this registration can be repeated for different user locations/positions in front of the screen to account, for example, for different spatial offsets from the edge, different heights, different distances, etc. For each location, the best possible mapping is created, assuming several distortions (projection, cylinder, fisheye, or any combination thereof, etc.) that can be corrected with different registration techniques (for example, projection, affine transformation, similarity, polynomial, or any combination thereof, etc.).
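The registration step can be illustrated with a standard fitting routine: given the coordinates of the same markers in the tilted-camera image and in the eye-level reference image, a projective (or affine, polynomial, etc.) model is fitted. The sketch below uses OpenCV's homography fitting; the marker coordinates are made up:

```python
import cv2
import numpy as np

# Where each sticker appears in the tilted-camera image vs. the eye-level
# reference image (coordinates here are made up).
camera_pts = np.float32([[310, 120], [330, 470], [180, 300], [480, 305]])
reference_pts = np.float32([[300, 100], [300, 500], [150, 300], [450, 300]])

# Fit a projective model from the correspondences; RANSAC tolerates a few
# badly localized markers. Affine/polynomial models can be fitted likewise
# and the best-matching one kept per calibration location.
H, inliers = cv2.findHomography(camera_pts, reference_pts, cv2.RANSAC, 3.0)

frame = np.zeros((600, 600, 3), dtype=np.uint8)   # stand-in for a live frame
corrected = cv2.warpPerspective(frame, H, (600, 600))
```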
[093] [093] The embodiments described above can be implemented in connection with the additional enhancements below. Image stitching can be used with multiple cameras located so as to improve resolution quality, minimize the distortion of a single camera, or increase the field of view. The image stitching element will stitch based on the user's geometric location. Each camera will have its own transformation correction, since the offset relative to the virtual eye location will be different. The various cameras can also be used to generate a three-dimensional view to improve the user experience.
[094] [094] A three-dimensional infrared (3D IR) image can also be generated. Such an image can be used to generate an accurate virtual model of the user, which can then be used to enable an API for virtual dressing and augmented reality. Using augmented reality, one can change the background, insert objects, virtually change into different clothes, etc. This can also be used to analyze the user's body language and relate it to augmented reality applications, virtual dressing, games, videoconferences, etc.
[095] [095] As the system records images, both photography and video, several analyses can be performed on them. For example, video analysis and/or store analysis can be used to collect information about the user or about store performance, for example, how often the user smiled, whether he enjoyed the experience, estimated age, gender, ethnicity, etc. Various inputs can be used for the behavioral analysis platform, which can help integrate eCommerce into the real try-on experience, or serve as an addition to any other application. For example, the system can analyze the item or items the user tried on and relate them to eCommerce shopping, 3D printing and web inventory, an eCommerce stylist, social networks, etc.
[096] [096] Reference is made to Figure 9, which is a block diagram of an embodiment of the invention that performs image transformation to generate an image that mimics a mirror. The various system modules illustrated in Figure 9 can be implemented on a programmed general-purpose computer, DSP, CPU, GPU, camera DSP/ASIC, screen DSP/ASIC, stand-alone computing device, FPGA card, DSP device, ASIC, parallel computing cloud, etc.
[097] [097] Block 930 represents one or more cameras 1:n, which transmit images to the image capture module 932. Block 930 can include still and/or video cameras, an IR camera, and two- and/or three-dimensional camera arrangements. A rangefinder, for example, an electro-acoustic or electronic rangefinder, can also be included.
[098] [098] The image capture module 932 captures images transmitted from block 930 and can apply filters to improve image quality. In addition, it can crop or resize the image as needed to optimize it. If multiple cameras are used, the capture module 932 applies image stitching as needed for better quality or a wider effective field of view.
[099] [099] The trigger event module 960 is a parallel process that can obtain its input directly from the image capture module 932, although it can also obtain images after the eye matching module 934. The input can be optimized in size, bandwidth and rate to achieve the required functionality. Examples of elements that may reside in the trigger event module 960 include the following. Identifying that a user is standing in front of the camera 930, for example, based on background subtraction and changes in a predefined zone, pattern recognition, etc. Measuring the distance to the user by, for example, correlation between stereo cameras, a 3D IR camera, etc. According to another embodiment, the distance to the user is estimated using a single-camera measurement by making some geometric assumptions, for example, that the user is standing approximately in front of the mirror and on a flat floor, so that the distance, the user's height, or the user's theoretical point of view in a mirror can be deduced from measuring the location of the user's shoes and the user's spatial offset from the edge of the screen.
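The single-camera geometric assumption can be illustrated as follows: with a flat floor and a known camera height and tilt, the pixel row of the shoes fixes the ray to the floor, and intersecting that ray with the floor plane yields the distance. All numeric defaults below are hypothetical:

```python
import math

def distance_from_shoe_row(y_shoe_px, img_h_px=1080, vfov_deg=60.0,
                           cam_height_m=2.4, cam_tilt_deg=12.0):
    """Distance to the user from the pixel row of the shoes, assuming a flat
    floor and a known, downward-tilted camera above the screen."""
    # Focal length in pixels from the vertical field of view.
    f_px = (img_h_px / 2) / math.tan(math.radians(vfov_deg) / 2)
    # Angle of the shoe ray below the optical axis.
    alpha = math.atan((y_shoe_px - img_h_px / 2) / f_px)
    # Intersect the ray with the floor plane.
    return cam_height_m / math.tan(math.radians(cam_tilt_deg) + alpha)

print(distance_from_shoe_row(950))   # roughly 3.3 m with these defaults
```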
[0100] [0100] A facial recognition module can be included in order to facilitate user identification. In operation, after registration, the platform saves the information for each user, and as soon as the user is recognized by the system, the system can load the user's data, make updates, suggest items, etc. With facial recognition, the user does not need to identify himself, which saves time and increases ease of use. Facial recognition can be the triggering event for automatic recording of appearances. In some embodiments, the recording duration for each session is predefined and can start as soon as facial recognition succeeds. In other embodiments, remote control techniques can be used as a trigger, such as a function in a mobile application or on a tablet/cell phone/other remote control device.
[0101] [0101] Other elements that can be found in the trigger event module 960 include video analysis (to create meaningful feedback, for example, estimated age, gender, mood, other popular items, etc.), item recognition (so that the system can easily list the items tried on in the store, augmenting the inventory and eCommerce platform for better recommendations, faster checkout and inventory planning), and gesture identification (for seamless user control without an input device).
[0102] [0102] The eye matching transformation module 934 performs the image transformation before display on the screen(s) 940. It takes the image as input from the capture module 932 and, in addition, can receive the calculated or actual distance of the user, the user's height, or the point of view. The eye matching transformation module 934 calculates the required mapping. The required mapping can be calculated directly, based on calculating the difference in projection angle between the camera and the user's theoretical point of view, for example, matching the scale and positioning of the required image. Alternatively, a true-match approach can be used, which relies on a factory calibration process to create a very precise transformation between the camera and the calculated point of view at discrete locations in front of the mirror. In addition, base images and transformations created from one or more further parameters, for example, distance, height or pose, measured at the factory or in real time while the user stands in front of the screen, can also be used.
[0103] [0103] Based on the user's distance, or the combination of the user's distance and eye point of view, the eye matching transformation module 934 creates the transformation by interpolation anywhere in front of the camera. Since the distance from different parts of the user's body to the camera is not uniform, projection distortion creates the effect that the body parts closer to the camera, such as the head, are captured by more pixels than the more distant parts, such as the legs. Consequently, the user appears with shorter legs and a larger head; that is, the parts closer to the camera appear larger and the parts farther away appear smaller. This mapping is not linear, that is, each pixel represents a different length and width of area (dx, dy), so pixel filling/decimation sub-sampling has to be performed to maintain the aspect ratio. The fill approach can be any interpolation (nearest, linear, cubic, polynomial, etc.) or a more complex fill from a different camera projection. For example, cameras above and below the screen can complement each other and better estimate missing pixels or correct eye orientation. An additional option is to perform some of the correction in direct optics (special lenses/prisms), which will correct most of the projection distortion and improve the resolution. Some of the resizing issues that affect quality can also be compensated for optically by a mechanical zoom.
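A sketch of the per-pixel, non-linear resampling is given below: a remap grid gives, for every output pixel, the sub-pixel source coordinate, and the interpolation flag selects the fill strategy (nearest, linear, cubic). The distortion profile used here is a toy stand-in for the actual mirror mapping:

```python
import cv2
import numpy as np

h, w = 480, 640
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
# Toy distortion profile: sample the source more densely toward the bottom,
# which magnifies the legs relative to the head in the output.
stretch = 1.0 - 0.25 * (ys / h)
map_x = (xs - w / 2) * stretch + w / 2
map_y = ys * stretch

frame = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
# cv2.remap resamples every output pixel at the (sub-pixel) source position;
# the interpolation flag is the fill strategy (nearest, linear, cubic).
out = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_CUBIC,
                borderMode=cv2.BORDER_CONSTANT)
```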
[0104] [0104] An augmented reality virtual dressing module 936 can be included to enhance the user experience. Module 936 obtains its input from the eye matching module 934 or from the recording module 938. The eye matching module 934 performs an image transformation and will be explained in more detail below. The eye matching module 934 can be used in several applications in addition to the mirror, such as video cameras, videoconferences, etc. In addition, module 936 can obtain input elements to process from the CPU, which maintains an active link with the inventory, a database, 3D printing, an eCommerce database, etc. Using the input data, module 936 can process digital images to be merged with the transposed image. For example, module 936 can be used to change the color of an item based on the actual try-on and available inventory or user customization. It should be appreciated that the processing of module 936 differs from normal virtual dressing, in that module 936 performs the processing on an item that is viewed while it is on the user's body. Therefore, for example, rendering an item in a different color does not change the physical appearance of the item the user is wearing, but only its color. Consequently, the user can feel the real item on his body, see how it really fits and changes the shape and appearance of his body, view real folds and stretches, etc. Module 936 can also be used to add accessories to the user's real image or to add real virtual dressing capability to the platform. Similarly, module 936 can be used to augment the background, changing or creating different environments to match the item the user is wearing, for example, a beach background for swimwear, a night club background for an evening dress, etc.
[0105] [0105] The video/photo recording module 938 receives its video/photo directly from the camera capture module 932, from the eye matching module 934, or from the augmented virtual dressing module 936. In addition, it receives control signals from the control module 962, which indicate when to start/stop recording, what information to store per user, and on which video/photo to perform additional offline transformation/augmentation, augmented reality/virtual dressing/quality processing, etc. The method used for recording without loading the system resources (CPU/BUS/GPU) can use the encoding/decoding capabilities of the GPU cards.
[0106] [0106] In addition to keeping a local copy of the video/images in the mirror (station) memory, the images can be automatically replicated to a cloud server and automatically encoded to any required size.
[0107] [0107] The recording process can also modify the frame rate, compression and cropping, change formats, and modify video and color effects.
[0108] [0108] In addition to recording, the system can optionally provide live streaming of the mirror video. For example, the system can stream in real time on the web from any mirror, which means that any device will be able to watch the stream, including mirror to mirror.
[0109] [0109] The control module 962 controls the operation of all the other modules. The control module 962 configures all hardware and software elements in the system, including the cameras 930, the screen 940, the DSP, etc. The control module 962 includes an interface that serves as a link between the local station and the cloud-based eCommerce 950, other web applications 952, smartphone applications 954, etc. The information recorded by the recording module 938 can be delivered to the user immediately by wireless/IR/wired signals, or can be delivered, with user authorization, to a cloud-based application such as Facebook, Google or others, or to the company server, which may be connected to the eCommerce application.
[0110] [0110] The factory calibration module 964 is used when the transformation depends on actual image transformations. The geometric orientation of the camera relative to the screen can be determined/measured at the factory, or it can be measured in the field, and offsets can be applied to match the calibration to the factory condition. In addition, the correction can be combined with the height or mounting angle of the mirror relative to the floor. The calibration process can be implemented as follows. The first step is to create a mirror reference of how the image should look in the mirror at different discrete distances, spatial orientations and eye reflection locations (i.e., theoretical points of view). The reference can be obtained in several ways.
[0111] [0111] As soon as the reference image is obtained, another image is taken with the camera placed in its actual service location, for example, above the screen; this will be referred to here as the actual image. Then, registration of the reference image and the actual image is performed by matching the markers in the actual image with those in the reference image. From the registration information, the transformation that best corrects the distorted image to the right eye and body orientation is extracted. In addition, based on the actual installation geometry in the field, an offset or correction can be added to the transformation. Based on the user's distance and eye point of view, an interpolation of the transformation anywhere in front of the camera can be created.
[0112] [0112] Figure 11 is a block diagram illustrating a process according to an embodiment of the invention.
[0113] [0113] As soon as the target and reference images are obtained, the distorted target images obtained from the tilted camera are registered to the reference image(s) using the markers, in order to represent how the user would look in a mirror positioned at the screen location. The registration output is a transformation operator, for example, a set of markers, each having two coordinates: one in the output image and one in the reference image. Several transformations can be tested on different target and reference images. For example, various distortion functions can be applied to obtain the best matching performance. The principal functions would be projection, resizing, XY translation (offset) and left-right flipping. The best transformation combination is selected, and the mapping transformation is created for each calibration point. This mapping transformation is the factory calibration according to which live images in the field are transformed to imitate a mirror image. It should be appreciated that the mapping can include different transformations for each pixel or each section of the live image, in order to provide the best representation of a mirror reflection.
[0114] [0114] As can be understood, step 1100 can be implemented at the factory, before shipping the system. However, this procedure can also be performed in the field, especially when the camera and the screen are provided as separate elements, such that in different installations the camera may be positioned differently with respect to the screen. In such circumstances, it may be beneficial to provide the system with a calibration target, for example, a mannequin (3D target) or a board with calibration markings (2D target), so that the user is able to perform the calibration in the field using a standard target. The system would then be pre-programmed to carry out the calibration process and generate the transformation mapping in the field.
[0115] [0115] As soon as the system is installed at its service location, the process proceeds to step 1105, in which a user's live image or video is obtained and captured by, for example, a frame grabber. At the same time, the user's location is measured, from which the user's point of view is determined. This can be done using IR measurement technology, or using available sensors, such as the Kinect®, available from Microsoft®. In step 1115, if necessary, a scaling factor is optionally applied to match the screen size and the user's height.
[0116] [0116] In step 1120, the appropriate transformation mapping is selected, or a factory-calibrated mapping is interpolated to create the correct mapping for the user's particular location in front of the screen. The transformation mapping is then applied by the eye matching module to the live video. In this context, the term "live" also refers to a video feed that was captured by a frame grabber and may reside in storage before being sent to the monitor screen for presentation. In addition, if necessary due to the transformation, in step 1125 the missing pixels are filled in by an interpolation technique. In step 1130, the image is sent from storage to the screen to be displayed. It should be appreciated that, thanks to commercially available fast processors, steps 1105-1130 can be performed in real time, in such a way that the user does not perceive any delay in the presentation of the image on the screen, which is referred to here as a "real time" or "live" presentation.
[0117] [0117] The eye matching module can be used to provide a more natural environment that better corresponds to daily interaction. For example, when making a video call using a PC, such as when using Skype®, since the camera is usually positioned above the monitor, the caller usually appears to be looking down, away from the camera, since the caller is actually looking at the screen. This makes for an unnatural conversation, since the interlocutors are not looking at each other while talking. The eye matching module can be used in such an application by applying a transformation such that the image displayed on the monitor looks as if the interlocutor is looking directly at the video camera, even though the interlocutor is actually looking at the screen and thus away from the video camera. The transformation can be carried out on either the sender's or the recipient's computer. For example, a user using Skype® may have the eye matching module installed on their computer, so that whenever the video camera is active for a video conference, the eye matching module intercepts the signal from the video camera and applies the transformation before allowing the video to be transmitted to the other user via the Skype application. When the other user receives the video signal, it has already been transposed, so that the other user sees the image as if the interlocutor were looking directly at the video camera. The same can be implemented with standard video conferencing systems, such as WebEx, Polycom, etc.
[0118] [0118] Figure 12 is a block diagram that illustrates the modules and processes to carry out the calibration and transformation of the image according to an embodiment of the invention. Block 1230 represents the image acquisition module, which can include one or more still or video cameras, an IR camera or distance measurement sensor, a 3D camera arrangement, etc. Module 1232 controls the optimization settings of the camera, for example, to obtain an adequate FOV, image the appropriate zone in front of the screen, set focus and/or resolution, etc. Module 1260 is the trigger event module, which can be used to identify the presence of a user in front of the screen and start the image acquisition and transformation process. Otherwise, the system can be configured to idle, showing only a background image on the screen until a user is detected.
[0119] [0119] Block 1260 encompasses the calibration procedure, which will be described here for a system configured for a field installation in which the camera 1230 is positioned above the monitor screen 1240, as also shown in Figure 1. The calibration 1260 can be carried out at the factory before shipping, or in the field after installation. At 1261, a target with markers is positioned and, at 1262, a set of reference images is obtained (using, for example, the camera at eye level and looking straight ahead) and a corresponding set of input images is obtained (for example, using the camera in its field installation position above the screen and pointing down). At 1263, the markers of each input image are matched with the corresponding markers in the corresponding reference image in order to generate transformation vectors. Depending on the quality of the mirror representation sought, the vectors corresponding to the areas around a user's eyes are optionally adjusted at 1264.
[0120] [0120] Using the matching of the markers of the input images with those of the reference images, at 1265 the transformation parameters that yield the best image correspondence can be determined and selected. The parameters that can be used for the transformation include the following. A highly effective transformation parameter is the tilt transformation. The tilt transformation corrects the image distortion caused by the camera being tilted downward: it modifies an input image obtained with the camera looking down and transforms it to look as if the camera were pointing along a horizontal straight line (the video engine can also correct azimuth error). Another transformation parameter is the elevation translation. The elevation spatial translation transformation linearly modifies the offset in an image obtained with a camera positioned off the screen, for example, above the screen, and transforms the image to look as if it were obtained from the location of the screen at the level of the user's eyes (matching the eye height in the image to the height on the screen; horizontal translation can also be applied to correct misalignment between the camera and the center of the screen). Another important transformation for mirror imitation is the scale. As explained above, in a real mirror reflection the user sees very little change in the size of his body image as he approaches the mirror. Conversely, in a camera image, the size changes considerably with the distance to the camera. A scale transformation reduces the effect of size changing with the distance to the screen, so that the user sees himself at an almost constant size on the screen, regardless of his distance from it. Depending on the camera's position and FOV, an x-y translation of the image may also be required.
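For illustration, the three corrections can each be expressed as a 3x3 homography and composed by matrix multiplication into a single per-frame operator (parameter values below are illustrative only):

```python
import numpy as np

def tilt_matrix(tilt_rad, f_px, cx, cy):
    """Re-renders a downward-tilted camera as a level one: a rotation about
    the camera's x-axis conjugated by the intrinsic matrix."""
    K = np.array([[f_px, 0, cx], [0, f_px, cy], [0, 0, 1.0]])
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return K @ R @ np.linalg.inv(K)

def translation_matrix(dx, dy):          # elevation and horizontal translation
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1.0]])

def scale_matrix(s, cx, cy):             # scale about the image center
    return np.array([[s, 0, cx * (1 - s)], [0, s, cy * (1 - s)], [0, 0, 1.0]])

# The corrections compose by matrix multiplication into one homography,
# applied per frame (e.g., with cv2.warpPerspective).
H = (scale_matrix(0.9, 320, 240)
     @ translation_matrix(0, 80)
     @ tilt_matrix(np.radians(12), f_px=900, cx=320, cy=240))
```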
[0121] [0121] In practice, it has been shown that performing the tilt, elevation and scale transformations provides a very convincing image that only needs to be flipped from right to left to imitate a mirror reflection very effectively. For further improvement, a cylinder-effect transformation and a fisheye transformation can also be used. In addition, depth perception can be added to the projected image by adding lighting and shading, either artificially, by operating on the intensity and/or color of the image pixels, or physically, by controlling lighting elements, such as strategically positioned LEDs at the screen or at the user. In addition, a 3D effect can be created either with 3D glasses or with glasses-free 3D technology. In glasses-free 3D technology, a different set of images is projected for each eye. Since in the described embodiments the distance to the user and the location of the user's eyes are measured or estimated, it is easy to project a 3D image to the user by projecting a different set of images to each of the user's eyes.
[0122] [0122] In step 1266, a transformation mapping is generated and stored. This transformation mapping will be used at 1234 for transformations of the live video feeds from the image acquisition module 1230, using the estimated distance to the user. Depending on the operation of the transformation mapping, pixel filling may be required. For example, using a camera at the top of the screen and tilted downward would require tilt and elevation transformations, which produce an image that can be completed using pixel filling. This can be done at 1237, and the resulting image is displayed on the screen at 1240.
[0123] [0123] Figure 13 illustrates another embodiment, in which the calibration and transformation mapping are performed in the field, after installation of the system. In the example of Figure 13, camera 14 is positioned above video screen 12 and is tilted downward. Camera 14 and screen 12 are in communication with, and are controlled by, controller 18. The user is provided with an input device 16, which can be in communication with controller 18 using wired or wireless technology. The user stands at an appropriate location, for example, 2-3 meters from screen 12, and starts the calibration process by, for example, entering a "calibrate" command from the remote input device 16. During the calibration process, a live video feed from camera 14 is transformed by controller 18, and the controller flips the image about the central vertical axis (left to right) before displaying it on screen 12. Using the input device, the user operates the various transformation functions. For example, the input device can include input buttons to correct tilt, elevation and scale, as schematically illustrated in Figure 13. The user's input commands are transmitted to controller 18, which then applies the transformations in real time to the video feed, so that the image changes in real time in front of the user. The user can change the amount of each transformation function until he sees a convincing image on the screen, at which point he can press an input button indicating that the calibration is complete. Controller 18 then saves the calibration parameters and uses these transformation parameters for all future video feeds.
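A minimal console stand-in for this interactive calibration loop is sketched below (the key bindings, step sizes and file name are assumptions); the operator nudges tilt, elevation and scale until the on-screen image is convincing, then the parameters are saved for reuse:

```python
import json

# Console stand-in for the remote input device 16 of Figure 13.
params = {"tilt_deg": 0.0, "elev_px": 0.0, "scale": 1.0}
steps = {"t": ("tilt_deg", 0.5), "e": ("elev_px", 5.0), "s": ("scale", 0.01)}

def calibrate():
    while True:
        cmd = input("t+/t-/e+/e-/s+/s- to adjust, q to save: ").strip()
        if cmd == "q":
            with open("mirror_calibration.json", "w") as f:
                json.dump(params, f)     # reused for all future video feeds
            return params
        if len(cmd) == 2 and cmd[0] in steps and cmd[1] in "+-":
            key, step = steps[cmd[0]]
            params[key] += step if cmd[1] == "+" else -step
            # ...re-render the live frame here with the updated transformation,
            # so the image changes in real time in front of the user.
```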
[0124] [0124] Another feature illustrated in Figure 13 is the distance calculation. Since the position and tilt of camera 14 relative to screen 12 are known (for example, due to the fixed bracket securing the camera to the top of the screen), the image captured by camera 14 can be used for triangulation and distance calculation. For example, when a user appears in a frame of a live video stream captured by camera 14, triangulation to the tips of the user's shoes can be performed to calculate the user's distance from screen 12. This operation can be performed every fixed number, n, of frames, so that the user's distance from the screen is continuously updated to capture the user's movement. The background can be updated whenever there is no user in front of the mirror - an adaptive background method.
[0125] [0125] An additional feature illustrated in Figure 13 is the use of lighting to create depth and/or atmosphere. Specifically, several light sources 17, for example, LED arrays, can be placed at different locations on screen 12 and can be controlled by controller 18. The light sources 17 can be operated so as to create more depth in the image, for example, by adding shadows and well-lit areas. The light sources 17 can be of different colors, in order to improve the existing illumination and create the appropriate overall "temperature" of the displayed image. The light sources can also be tuned according to the distance to the user, in order to create a consistent image and remove artifacts created by the store lighting. Alternatively or additionally, color and lighting changes can be performed by the controller directly on the digital image received from camera 14. For example, a color transformation can be used to enhance the mirror appearance of the image, for example, by operating on parameters that make the image appear glossy, wavy, sharp, matte, metallic, etc. Controller 18 can also add virtual light spots and/or shadows to the image to create more depth. In addition, an anti-reflective coating can be provided in front of the screen in order to remove or reduce the reflections normally associated with flat panel displays. The LED array can be connected to a light sensor and/or a light temperature sensor, can be preset to remain at particular light levels and light temperatures, and will adjust automatically.
[0126] [0126] According to other embodiments, the 3D image is implemented with two cameras at a distance D from each other. The distance D is calculated as an average distance between human eyes, usually referred to as the interpupillary distance (IPD), which for adults is around 54-68 mm. The transformation is calculated by registering the image obtained from the input cameras at distance D with images obtained from two reference (base) cameras at a spatial distance likewise similar to the distance between the user's eyes. The basis for registration can be obtained using a Kinect® or any 3D IR camera. Google glasses can also be an option for calibration, in which the user/target wearing Google glasses takes a picture of himself in front of a normal mirror. Since the FOV is fixed, the only thing that needs to be done is to resize the image to its true size. An image transformation module, such as controller 18, can also apply image scaling to zoom in/out, to allow the user to fit the image to the screen or to focus somewhere.
[0127] [0127] As can be understood, although some of the above embodiments relate to the construction of the transformation mapping by registering images or by other empirical methods, alternatively, direct calculations of the distortion between the camera and the theoretical point of view of the user can be performed analytically. Using analysis avoids the need for any registration procedure. Instead, the analytical mapping transformation is created to correct the distortions.
[0128] [0128] Yet another feature illustrated in Figure 13 is the use of the Internet or cloud for provisioning services related to the transformation and mirror presentations. For example, according to one embodiment, the transformation can actually be performed in the cloud, if a sufficiently fast connection exists. In this case, the feed from camera 14 can be sent to server 181, which applies the transformation and sends it back to controller 18 for display on the monitor screen. According to another embodiment, images from camera 14, or transformed images from camera 14, can be stored in the cloud and transmitted to devices such as smartphone 183, tablet 187, and another monitor screen 189, such as a flat-screen TV. This can be done in real time while the user is trying on a suit, so that the user can share the experience and get input from others at remote locations. In addition, once the user leaves the store, the user can have access to all the images obtained, using a smartphone, tablet, PC, etc. Note that, to facilitate the user interface, as shown on monitor 189 in Figure 13, thumbnail images of various try-ons can be displayed alongside the current image to make it easier for the user to choose between views.
[0129] [0129] Figure 14 illustrates an embodiment for extracting data from the image of camera 1430. The trigger event module 1450 detects the presence of a user and activates the data extraction process. A camera optimization module 1432 can be used to control the camera to obtain the best image for data extraction. In this embodiment, with the camera 1430 located above the screen and pointed down, it can observe the user's shoes from, for example, 1 m to 4 m from the screen. However, since the camera is pointed down and the shoes are much farther from the camera than the user's head, the distortion is at its maximum. With the transformation mapping discussed above, a convincing distortion improvement is achieved at all distances.
[0130] [0130] Additionally, according to this embodiment, the user's image is separated from the background at 1462. At 1464, the center of mass is calculated by multiplying the binary image of the user by an index matrix and then taking the mean of the pixel indices. In addition, the lowest body pixel below the center of mass (j, k) is determined by opening a window around k (the center of mass) and finding the lowest active index - which is assumed to represent the edge of the shoes. At 1468, the user's height is calculated by finding the top of the head and, based on the distance, the camera's resolution, the FOV and the geometric tilt of the camera, the user's height is estimated.
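These steps can be sketched directly in array arithmetic, assuming a binary mask in which the background has already been subtracted (the window size and the mask contents below are made up):

```python
import numpy as np

def body_geometry(mask, half_window=20):
    """Center of mass, shoe row and head row from a binary user mask
    (background already subtracted)."""
    ys, xs = np.nonzero(mask)              # indices of user pixels
    j, k = xs.mean(), ys.mean()            # center of mass (column, row)
    # Lowest active row inside a window around the center column ~ shoe edge.
    window = mask[:, int(j) - half_window:int(j) + half_window]
    shoe_row = np.nonzero(window.any(axis=1))[0].max()
    head_row = ys.min()                    # top of the head
    return (j, k), shoe_row, head_row

mask = np.zeros((480, 640), dtype=bool)
mask[100:450, 280:360] = True              # fake silhouette for the example
com, shoe_row, head_row = body_geometry(mask)
# With distance, resolution, FOV and camera tilt known, the pixel span
# (shoe_row - head_row) converts to the user's physical height.
```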
[0131] [0131] Figure 15 illustrates an embodiment in which the stitching of images from n cameras is performed. Stitching is particularly beneficial for improving image resolution and FOV when the user approaches the screen. In the embodiment of Figure 15, the live feeds 1530, 1532, etc., from the individual cameras are processed as follows.
[0132] [0132] Each camera can be subjected to separate camera optimization, 1533, 1536, and the feed from each camera undergoes a different geometric transformation, 1560, 1562, since the location and orientation of the cameras are different relative to the user's point of view, 1564, 1562. The overlap decision 1570 is the decision element for determining where it is best to perform the stitch. In a scenario with two cameras, one located above and one below the screen, the overlap decision will indicate the index of the best cut. Since a distance measurement is constantly available, it is possible to keep the stitching line almost fixed as the user approaches the mirror. For maximum performance, the stitching 1572 needs to be optimized based on the user's distance from the cameras. Smoothing 1574 is required to filter the lines that can be generated at the intersections of different images from different cameras. The images from different cameras will differ slightly due to differences in absolute quality and lighting, and due to the different distortion correction of each camera. Interleaving between images over multiple lines can be used to soften the lighting effect. The stitched image is output at 1576.
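A sketch of the seam with a linear cross-fade over the overlap band is given below; in it, the cut row stands in for the output of the overlap-decision element, and both inputs are assumed to be already geometry-corrected and aligned:

```python
import numpy as np

def stitch_vertical(top_img, bottom_img, cut_row, blend_rows=20):
    """Stitch a top-camera and a bottom-camera image (already corrected and
    aligned) at cut_row, cross-fading over blend_rows to hide the seam."""
    out = top_img.astype(np.float32)
    out[cut_row + blend_rows:] = bottom_img[cut_row + blend_rows:]
    # Linear cross-fade over the overlap band smooths lighting differences.
    alpha = np.linspace(1.0, 0.0, blend_rows)[:, None, None]
    band = slice(cut_row, cut_row + blend_rows)
    out[band] = alpha * top_img[band] + (1 - alpha) * bottom_img[band]
    return out.astype(top_img.dtype)

top = np.full((480, 640, 3), 200, dtype=np.uint8)
bottom = np.full((480, 640, 3), 100, dtype=np.uint8)
stitched = stitch_vertical(top, bottom, cut_row=300)
```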
[0133] [0133] In the embodiments described above, the transformation mapping corrected the image obtained by the cameras and provided an image that mimics a mirror reflection. The following embodiment, illustrated in Figure 16, improves the quality of the eye presentation. The idea behind this embodiment is to replace the eye area, or just the eyes, with modified pixels that create the sensation of a direct gaze - that is, presenting the eyes in their fullness, imitating the user looking directly at himself in the mirror. As can be understood, the user would be looking directly at himself as projected on the screen but, since the camera is not positioned at eye level pointing horizontally, the user's eyes, as captured by the camera, would not be looking straight ahead. Figure 16 corrects this problem.
[0134] [0134] Most of the elements in Figure 16 are similar to those in Figure 12 and carry the same references, except that they are in the 16xx series. These elements will not be described here again. Instead, attention is directed to element 1680, which is responsible for eye correction. Since the scaling has already been corrected by the previous elements, what needs to be done at this stage is to repair and/or reconstruct the internal elements of the eyes, for example, the pupil, iris, etc., in order to present the appropriate image.
[0135] [0135] Module 1680 first locates the eyes in the image and verifies that the user is actually looking at the screen.
[0136] [0136] Another feature that can be implemented in any of the embodiments described above is the update frequency of the transformation parameters. The user may prefer a stable image while standing still in front of the mirror-imitating screen, and changing the transformation mapping while the user stands still can create an uncomfortable feeling. On the other hand, when the user is moving toward or away from the screen, it may be better to update the transformation parameters faster. Accordingly, with this feature, several behavior zones are established. According to one embodiment, the behavior zones are established according to distance only. For example, when the user moves more than a defined distance value, the video engine is updated. According to one embodiment, if the user moved less than, say, 25 cm, the transformation mapping is updated at a lower rate, say every x seconds, while if the user moved more than 25 cm, the transformation mapping updates its parameters at a second, higher rate. According to another embodiment, the zones are defined according to the user's movement speed. For example, if it is determined that the user is moving at less than x cm/s, a first update frequency is used, while if the user moves faster, a second, faster update frequency is used.
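One possible form of these behavior zones is sketched below: the mapping is rebuilt on a slow timer while the user is nearly still and on a fast timer once the movement exceeds a threshold (all thresholds and periods are illustrative):

```python
import time

class TransformUpdater:
    """Rebuild the transformation mapping slowly while the user stands still
    and quickly once the user moves beyond a threshold."""
    def __init__(self, slow_s=2.0, fast_s=0.1, move_thresh_m=0.25):
        self.slow_s, self.fast_s, self.thresh = slow_s, fast_s, move_thresh_m
        self.last_update, self.last_distance = 0.0, None

    def maybe_update(self, distance_m, rebuild_mapping):
        moved = (self.last_distance is not None and
                 abs(distance_m - self.last_distance) > self.thresh)
        period = self.fast_s if moved else self.slow_s
        now = time.monotonic()
        if now - self.last_update >= period:
            rebuild_mapping(distance_m)   # e.g., interpolate the calibration
            self.last_update, self.last_distance = now, distance_m

updater = TransformUpdater()
updater.maybe_update(2.3, lambda d: None)   # no-op mapping for the example
```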
[0137] [0137] Thus, as can be understood from the above, although a real-time video engine can be used for the user's image, the transformation presented on the monitor screen need not necessarily change in real time, depending on user behavior. For example, while the user is relatively still or moving very slowly, the transformation parameters can be varied at less than a real-time rate. Then, when the user starts to move faster or farther, the video engine can be updated more frequently.
[0138] [0138] It should be appreciated that the embodiments described above can be implemented using a digital camera (for example, a video camera) and a monitor, in which the images from the camera are fed to a processor. The processor applies the transformation mapping to the images and can display them on the monitor in either a display mode or a mirror mode. From the description above, it should be appreciated that the mirror mode can be implemented by displaying the live video feed on the monitor, as modified by the transformation mapping. That is, in mirror mode, the image being displayed on the monitor is an image/video of the user obtained in real time. Conversely, in display mode, the image being displayed is an image/video obtained in the past and retrieved from storage. The stored images can be raw images, such as the camera feed, or already transformed images. In either case, the previous images displayed in display mode are transformed images. Therefore, if the stored images are raw images, the processor applies the transformation mapping before displaying them on the monitor.
[0139] [0139] Whether the system operates in a display mode or a mirror mode can be considered a matter of timing: during mirror mode, the image shown on the monitor is a transformed image of what the camera sees at that particular moment (or at a de minimis or imperceptibly earlier moment), while during display mode, the image shown is a transformed image of what this camera or another camera saw before, different from what the camera sees now. This question also relates to perception: during mirror mode, since the monitor shows a transformed image of what the camera sees now, the user watching the monitor has the perception of looking into a mirror, while in display mode, the user watching the monitor has the perception of watching a video of events that occurred in the past.
[0140] [0140] It should be appreciated that the system can be implemented separately and independently of the monitor screen. For example, in some locations (for example, a fitting room), only the monitor screen may be installed, without any camera. The monitor is configured to communicate with the system in order to download and display the stored images. The user can interact with the monitor to view previously obtained images, for example, to compare with the current clothing. As another example, all the images obtained can be uploaded to the cloud, so that a user can view the images on a PC or mobile device, for example, using an app on a tablet.
[0141] [0141] As can be understood from the description above, several embodiments of the invention provide an image transformation device which comprises an image input port for receiving digital images from a camera; a transposed image output port for outputting transposed images to be displayed on a monitor or stored in memory; and a transposition module that is programmed to receive images from the input port and apply a transformation to the images, wherein the transformation includes at least: flipping the image about a vertical axis so as to reverse the right and left sides of the image; applying a transformation mapping to the image to modify the image so that it appears to imitate a mirror reflection; and resizing the image to reduce variations caused by changes in the distance from the object to the camera. In other embodiments, a program is provided that, when run on a computer, causes the computer to transpose a digital image obtained from a camera so that the transposed image resembles a mirror reflection, the transposition including at least flipping the image about a vertical axis so as to reverse the right and left sides of the image; applying a transformation mapping to the image to modify the image so that it appears to imitate a mirror reflection; and resizing the image to reduce variations caused by changes in the distance from the object to the camera. The program can run on any general-purpose computer, such as a server, a PC, a tablet, a smartphone, etc.
[0142] [0142] According to other embodiments, a system is provided that allows the user to view his own image on a digital screen by projecting onto the digital screen an image that mimics a mirror image. The system includes a digital camera that generates a stream of images of the user; a controller that has an image input port for receiving the stream of images from the camera and that applies a transformation to the images to generate transformed images that mimic the user's reflection in a mirror; the controller having an output port for producing a stream of transposed images to be displayed on a monitor; and a storage facility to store the transformed images. The controller also has an Internet connection for uploading the stream of transposed images to the cloud. The system also has clients that make it possible to download and view the transformed images from the cloud.
[0143] [0143] In several embodiments, a controller applies a transformation to an image stream, which includes: flipping the image about a vertical axis so as to reverse the right and left sides of the image; applying a transformation mapping to the image to modify the image so that it appears to imitate a mirror reflection; and resizing the image to reduce variations in the distance from the object to the camera. The mapping essentially assigns new addresses to each pixel of the original image. The transformation mapping produces an image that appears to have been obtained from a camera positioned at eye level and pointing horizontally. The transformation mapping comprises at least a tilt transformation and an elevation transformation, where the tilt transformation transforms the image so as to resemble a change in the camera's tilt, and the elevation transformation transforms the image so as to resemble a change in the camera's elevation.
[0144] [0144] The transformation mapping can be one that includes angle, tilt, azimuth, scale, spatial translation (i.e., linear elevation offset or horizontal offset), etc. The final transformation is a multiplication of the individual transformation matrices that generate the individual distortions.
[0145] [0145] In other embodiments, a system is provided to enhance videoconferencing or video calls. The system includes a transformation mechanism for transforming a stream of images obtained from a video camera positioned at the periphery of a monitor screen, in which the images of a user who is looking directly at the screen portray a user who is not looking directly at the camera. The transformation mechanism receives the stream of images and transforms the images so as to provide images in which the user appears to look directly at the camera. The transformation includes any combination of corrections for angle, tilt, azimuth, scale, spatial translation (i.e., linear elevation offset or horizontal offset), etc. The transformation may also include replacing the eye area, or just the eyes, with modified pixels that create the sensation of a direct gaze - that is, presenting the eyes in their fullness, imitating a user who is looking directly at the camera.
[0146] [0146] In several implementations, the method also includes calculating the distance to a user who appears in the image by tracking the distance between the user's eyes or the size of the head, and scaling the image accordingly. For example, according to one embodiment, the system is programmed with an expected distance between the user's eyes, that is, the interpupillary distance (IPD), at an average distance from the camera. For 95% of adult males in the USA, the IPD is around 70 mm, while for women it is 65 mm. When a user is detected in front of the camera, the system can first determine whether it is a man or a woman, or simply go directly to the distance measurement program and use an average IPD value, for example, 68 mm. To measure variation, the system identifies the pupils in the image and scales the image to match the expected IPD. As the video images continue to stream, the system scales the image to keep the IPD constant and matching the expected IPD. Thus, when the user moves away from or closer to the camera, the user's size in the image projected on the monitor screen remains almost the same, thus imitating a reflection in a mirror. As can be understood, the system can use other standard measurements, such as the distance between the ears, etc., but using the eyes is simpler, since the system can recognize the eyes quite easily. However, if the user wears a mask, the system may have to resort to other measurements based on other parts of the body. This switch can be performed dynamically; that is, if the system finds the eyes, it uses the IPD, but otherwise it uses other parts of the body.
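A hedged sketch of the IPD-based scaling follows: given pupil coordinates from any eye detector, the scale factor is simply the expected on-screen pupil distance divided by the measured one (the expected value below is an assumption):

```python
import math

EXPECTED_IPD_PX = 60.0   # on-screen pupil distance at the reference distance

def ipd_scale(left_pupil, right_pupil):
    """Scale factor that holds the on-screen interpupillary distance constant,
    so the user's size barely changes with distance, as in a mirror."""
    measured = math.hypot(right_pupil[0] - left_pupil[0],
                          right_pupil[1] - left_pupil[1])
    return EXPECTED_IPD_PX / measured

s = ipd_scale((300, 200), (372, 201))   # user stepped closer -> s < 1, shrink
```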
[0147] [0147] In addition, effect filters can be applied, such as a lighting filter effect, a reflective texture effect, and color-spectrum filtering to create a metallic feel, etc. Similarly, the camera's shutter time, sensor gain, white balance, and any combination thereof can be controlled to modify the resulting image. In several embodiments, these parameters are controlled based on a dynamic region of interest (dynamic ROI), so that the changes applied to the parameters refer only to a selected region of the image, not the whole image. For example, the parameters can be modified and updated based on the distance to the user and limited to an ROI that is a window around the user. For example, the user's image can be separated from the background, and the parameters applied only to the pixels that belong to the user's image.
[0148] [0148] In additional implementations, an enhanced real-time video effect is implemented by, for example, recording video with a high-resolution camera at a first frame rate and driving the display at a faster frame rate so as to smooth the video. In addition, the background of the received video stream can be replaced by an artificial background stored in the system. Additional video processing can be used to add an accessory or other element to the image, or to change its color or texture. In addition, for videoconferencing and other applications, the transformation can be performed without flipping the image about the vertical axis.
[0149] [0149] Although certain features of the invention have been illustrated and described here, many modifications, substitutions, alterations and equivalents will occur to those skilled in the art. It should therefore be understood that the appended claims are intended to cover all such modifications and alterations as fall within the true spirit of the invention.
Claims (20)
[1]
1. METHOD TO OPERATE A SYSTEM THAT HAS A MONITOR, A CAMERA AND A PROCESSOR, in order to display an image that mimics a mirror on the monitor, by performing the following unordered steps, characterized by comprising: obtaining a digital image from the camera; flipping the image about a vertical axis so as to reverse the right and left sides of the image; applying a transformation mapping to the image to modify the image in such a way that it appears to imitate a mirror reflection; resizing the image to reduce variations caused by changes in the distance from the object to the camera; and displaying the image on the monitor after the flipping, transformation mapping and resizing.
[2]
2. METHOD, according to claim 1, characterized by the transformation mapping producing an image that appears to have been obtained from a camera positioned at eye level and pointing horizontally.
[3]
3. METHOD, according to claim 1, characterized by the transformation mapping comprising at least a tilt transformation and an elevation transformation.
[4]
4. METHOD, according to claim 3, characterized by the tilt transformation modifying an input image obtained with a camera pointing non-horizontally and transforming the image to look as if the camera were pointing directly along the horizontal.
[5]
5. METHOD, according to claim 3,
characterized by the elevation transformation modifying an input image obtained with a camera positioned at an arbitrary height and transforming the image to look as if the camera were positioned at the user's eye level.
[6]
6. METHOD, according to claim 1, characterized by the step of resizing the image being carried out integrally with the step of applying the transformation mapping.
[7]
7. METHOD, according to claim 1, characterized by further comprising the steps of: obtaining a reference image; registering the digital image with the reference image; and using data obtained from the registration to generate the transformation mapping.
[8]
8. METHOD, according to claim 1, characterized by further comprising applying a translation transformation to move the image within the display area of the monitor.
[9]
9. METHOD, according to claim 1, characterized by further comprising changing the illumination intensity of groups of pixels or individual pixels of the monitor in order to intensify the shading or illumination in the image.
[10]
10. METHOD, according to claim 1, characterized by further comprising applying at least one of a cylinder-effect transformation and a fisheye transformation.
[11]
11. METHOD, according to claim 1, characterized by further comprising:
determining the distance to a user who appears in the image; and varying the image resolution according to the distance.
[12]
12. METHOD, according to claim 1, characterized by further comprising a fill-in step, in which image pixels are extrapolated into empty pixels left after the application of the transformation mapping.
[13]
13. METHOD, according to claim 1, characterized by further comprising the steps of: obtaining a second digital image from a second camera; flipping the second image about a vertical axis so as to reverse the right and left sides of the second image; applying a transformation mapping to the second image to modify the second image in such a way that it appears to imitate a mirror reflection; resizing the second image to reduce variations caused by changes in the distance from the object to the second camera; stitching the image and the second image, after the flipping, transformation mapping and resizing, to obtain a stitched image; and displaying the stitched image on the monitor.
[14]
14. METHOD, according to claim 13, characterized by the fields of view of the camera and the second camera overlapping, and by further comprising an overlap decision step to determine the best location within the overlapping parts of the views at which to perform the stitching.
[15]
15. METHOD, according to claim 14, characterized by further comprising a smoothing operation to correct artifacts caused by the stitching step.
[16]
16. METHOD, according to claim 1, characterized by further comprising the steps of: obtaining a second digital image from a second camera; flipping the second image about a vertical axis so as to reverse the right and left sides of the second image; applying a transformation mapping to the second image to modify the second image such that it appears to mimic a mirror reflection; resizing the second image to reduce variations caused by changes in the distance from the object to the second camera; and displaying the image and the second image, after the flipping, transformation mapping and resizing, so as to obtain a three-dimensional image on the monitor.
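How the two views are combined depends on the display hardware; a red/cyan anaglyph is the simplest self-contained illustration (an assumption for this sketch; a stereoscopic monitor would consume the two transformed views directly):

```python
import cv2
import numpy as np

def anaglyph(left, right):
    """Present two transformed views as one 3D image: red channel from
    the left eye, green and blue (cyan) from the right eye. BGR order."""
    out = np.zeros_like(left)
    out[..., 2] = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)    # red
    gray_right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    out[..., 1] = gray_right                                # green
    out[..., 0] = gray_right                                # blue
    return out
```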
[17]
17. METHOD, according to claim 1, characterized in that the unordered steps are performed on a series of images of a live video feed from the camera.
[18]
18. METHOD, according to claim 17, characterized by further comprising: determining a distance to a user appearing in the live video; and varying the rate at which the unordered steps are performed on the series of images according to the distance.
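A sketch of a distance-dependent processing rate, expressed as the minimum interval between processed frames; the thresholds are illustrative:

```python
def frame_interval_s(distance_m):
    """Minimum seconds between processed frames: a close user's motion
    spans more pixels and gets the full rate; a distant user gets a
    reduced rate (illustrative thresholds)."""
    if distance_m < 1.5:
        return 1 / 30
    if distance_m < 4.0:
        return 1 / 15
    return 1 / 5
```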
[19]
19. METHOD, according to claim 17, characterized by further comprising the steps of storing the live video on a digital storage device and selectively operating the system in one of a mirror mode and a display mode, wherein during the mirror mode the monitor displays transformed images of what the camera observes at that particular moment, while during the display mode the monitor displays transformed images of what the camera observed in the past, obtained from the digital storage device.
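A sketch of the mode switch, with an in-memory buffer standing in for the digital storage device (an assumption made so the example is self-contained):

```python
import collections

class MirrorStation:
    """Switch between mirror mode (live transformed frames) and display
    mode (previously stored transformed frames, replayed oldest-first)."""

    def __init__(self, max_frames=30 * 60 * 5):      # ~5 minutes at 30 fps
        self.storage = collections.deque(maxlen=max_frames)
        self.mode = "mirror"

    def next_frame(self, live_transformed):
        self.storage.append(live_transformed)        # always record
        if self.mode == "mirror":
            return live_transformed                  # what the camera sees now
        return self.storage.popleft()                # what it observed in the past
```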
[20]
20. METHOD, according to claim 1, characterized by further comprising calculating a distance to a user appearing in the image by identifying the user's shoes and triangulating to the shoes.
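The triangulation reduces to a one-line pinhole relation once the shoes' image row is known: a floor point at distance d seen by a horizontal camera mounted at height h projects to row cy + fy·h/d, hence d = fy·h/(v − cy). A sketch (shoe detection itself is assumed done elsewhere; all values are illustrative):

```python
def distance_from_shoes(v_shoe_px, cam_height_m, fy_px, cy_px):
    """Triangulate the user's distance from the image row of the shoes,
    assuming a horizontal pinhole camera at cam_height_m above the floor."""
    if v_shoe_px <= cy_px:
        raise ValueError("shoes must project below the principal point")
    return fy_px * cam_height_m / (v_shoe_px - cy_px)

# e.g. distance_from_shoes(v_shoe_px=900, cam_height_m=1.6, fy_px=800, cy_px=540)
# -> 800 * 1.6 / 360 ≈ 3.6 m
```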
Similar technologies:
Publication No. | Publication Date | Patent Title
AU2019246856B2|2021-11-11|Devices, systems and methods of capturing and displaying appearances
US8982109B2|2015-03-17|Devices, systems and methods of capturing and displaying appearances
RU2668408C2|2018-09-28|Devices, systems and methods of virtualising mirror
US10527846B2|2020-01-07|Image processing for head mounted display devices
US10873741B2|2020-12-22|Image processing apparatus and method
CN105992965B|2018-11-16|In response to the stereoscopic display of focus shift
US7948481B2|2011-05-24|Devices, systems and methods of capturing and displaying appearances
US10085008B2|2018-09-25|Image processing apparatus and method
US10417829B2|2019-09-17|Method and apparatus for providing realistic 2D/3D AR experience service based on video image
US20200005386A1|2020-01-02|Systems and methods for virtual body measurements and modeling apparel
JPWO2016158729A1|2018-02-15|Makeup support system, measuring device, portable terminal device, and program
Wang et al.2015|An intelligent screen system for context-related scenery viewing in smart home
Chappuis et al.2014|Subjective evaluation of an active crosstalk reduction system for mobile autostereoscopic displays
US20210264684A1|2021-08-26|Fitting of glasses frames including live fitting
US20210192606A1|2021-06-24|Virtual Online Dressing Room
Patent family:
Publication No. | Publication Date
EP2936439B1|2019-02-20|
AU2013361507A1|2015-07-09|
JP2016509683A|2016-03-31|
IL239508A|2018-11-29|
AU2019246856A1|2019-10-31|
KR102265996B1|2021-06-16|
KR20150102054A|2015-09-04|
IL239508D0|2015-08-31|
ES2718493T3|2019-07-02|
JP6441231B2|2018-12-19|
RU2656817C2|2018-06-06|
AU2013361507A2|2015-07-16|
AU2019246856B2|2021-11-11|
RU2015127905A|2017-01-25|
EP3404619A1|2018-11-21|
WO2014100250A2|2014-06-26|
CN105210093B|2021-06-08|
WO2014100250A4|2014-10-09|
WO2014100250A3|2014-08-14|
RU2018118815A3|2021-09-08|
EP2936439A2|2015-10-28|
RU2018118815A|2018-11-05|
CN109288333A|2019-02-01|
EP2936439A4|2016-08-03|
CN109288333B|2021-11-30|
CN105210093A|2015-12-30|
Cited references:
Publication No. | Filing Date | Publication Date | Applicant | Patent Title

US5572248A|1994-09-19|1996-11-05|Teleport Corporation|Teleconferencing method and system for providing face-to-face, non-animated teleconference environment|
JP2947726B2|1995-03-01|1999-09-13|鹿島建設株式会社|Image system for remote control support|
JPH112859A|1997-06-12|1999-01-06|Minolta Co Ltd|Camera|
JP3232408B2|1997-12-01|2001-11-26|日本エルエスアイカード株式会社|Image generation device, image presentation device, and image generation method|
US6417850B1|1999-01-27|2002-07-09|Compaq Information Technologies Group, L.P.|Depth painting for 3-D rendering applications|
JP2000306092A|1999-04-16|2000-11-02|Nadeisu:Kk|Mirror realized by digital image processing and medium with built-in program for making computer perform the processing|
WO2001095061A2|1999-12-07|2001-12-13|Frauenhofer Institut Fuer Graphische Datenverarbeitung|The extended virtual table: an optical extension for table-like projection systems|
JP3505575B2|2001-03-23|2004-03-08|独立行政法人産業技術総合研究所|Digital mirror device|
US20020196333A1|2001-06-21|2002-12-26|Gorischek Ignaz M.|Mirror and image display system|
AT375570T|2002-06-10|2007-10-15|Accenture Global Services Gmbh|INTERACTIVE ANPROBERAUM|
JP4154178B2|2002-06-21|2008-09-24|キヤノン株式会社|Video camera|
JP2004297734A|2003-03-28|2004-10-21|Aruze Corp|Electronic mirror system|
JP2005010356A|2003-06-18|2005-01-13|Pioneer Electronic Corp|Display device and image processing system|
US20050047629A1|2003-08-25|2005-03-03|International Business Machines Corporation|System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking|
CN101156434B|2004-05-01|2010-06-02|雅各布·伊莱泽|Digital camera with non-uniform image resolution|
US7171114B2|2004-07-12|2007-01-30|Milton Curtis A|Mirror-mimicking video system|
US8982109B2|2005-03-01|2015-03-17|Eyesmatch Ltd|Devices, systems and methods of capturing and displaying appearances|
US7948481B2|2005-03-01|2011-05-24|Nissi Vilcovsky|Devices, systems and methods of capturing and displaying appearances|
JP2007006016A|2005-06-22|2007-01-11|Sharp Corp|Imaging equipment|
US20070040033A1|2005-11-18|2007-02-22|Outland Research|Digital mirror system with advanced imaging features and hands-free control|
JP4297111B2|2005-12-14|2009-07-15|ソニー株式会社|Imaging apparatus, image processing method and program thereof|
EP1868347A3|2006-06-16|2010-07-14|Ericsson AB|Associating independent multimedia sources into a conference call|
US8139122B2|2007-08-20|2012-03-20|Matthew Rolston Photographer, Inc.|Camera with operation for modifying visual perception|
CN101779460B|2008-06-18|2012-10-17|松下电器产业株式会社|Electronic mirror device|
JP5074322B2|2008-08-05|2012-11-14|オリンパス株式会社|Image processing apparatus, image processing method, image processing program, and imaging apparatus|
JP2010087569A|2008-09-29|2010-04-15|Panasonic Electric Works Co Ltd|Full-length mirror apparatus|
US8416282B2|2008-10-16|2013-04-09|Spatial Cam Llc|Camera for creating a panoramic image|
CN101383001B|2008-10-17|2010-06-02|中山大学|Quick and precise front human face discriminating method|
CN101770352B|2008-12-30|2012-12-26|广达电脑股份有限公司|Electronic immediate imaging system and method|
BR112012005222A2|2009-09-11|2019-09-24|Koninklijke Philips Electrnics N. V.|method for an image processing system, image processing system and computer program product|
JP2011146857A|2010-01-13|2011-07-28|Canon Inc|Image processor, image processing method and program|
WO2011148366A1|2010-05-26|2011-12-01|Ramot At Tel-Aviv University Ltd.|Method and system for correcting gaze offset|
KR101852811B1|2011-01-05|2018-04-27|엘지전자 주식회사|Display device and method for controlling thereof|
US8452081B2|2011-01-11|2013-05-28|Eastman Kodak Company|Forming 3D models using multiple images|
US8767030B2|2011-04-07|2014-07-01|Tara Chand Singhal|System and method for a grooming mirror in a portable electronic device with a user-facing camera|
JP5875248B2|2011-04-27|2016-03-02|キヤノン株式会社|Image processing apparatus, image processing method, and program|
JP2012244196A|2011-05-13|2012-12-10|Sony Corp|Image processing apparatus and method|
US8982110B2|2005-03-01|2015-03-17|Eyesmatch Ltd|Method for image transformation, augmented reality, and teleperence|
US8976160B2|2005-03-01|2015-03-10|Eyesmatch Ltd|User interface and authentication for a virtual mirror|
US8982109B2|2005-03-01|2015-03-17|Eyesmatch Ltd|Devices, systems and methods of capturing and displaying appearances|
US9269157B2|2005-03-01|2016-02-23|Eyesmatch Ltd|Methods for extracting objects from digital images and for performing color change on the object|
US11083344B2|2012-10-11|2021-08-10|Roman Tsibulevskiy|Partition technologies|
RU2612328C2|2014-04-04|2017-03-07|Сергей Евгеньевич Денискин|Training game system|
CN105426376B|2014-09-22|2019-04-23|联想有限公司|A kind of information processing method and electronic equipment|
KR20170093108A|2014-09-24|2017-08-14|프린스톤 아이덴티티, 인크.|Control of wireless communication device capability in a mobile device with a biometric key|
EP3227816A4|2014-12-03|2018-07-04|Princeton Identity, Inc.|System and method for mobile device biometric add-on|
US9858719B2|2015-03-30|2018-01-02|Amazon Technologies, Inc.|Blended reality systems and methods|
JP6461679B2|2015-03-31|2019-01-30|大和ハウス工業株式会社|Video display system and video display method|
KR101692755B1|2015-05-08|2017-01-04|스타일미러 주식회사|A system and method for mirror system sharing photos with two-way communication|
US9930248B2|2015-11-17|2018-03-27|Eman Bayani|Digital image capturing device system and method|
CN105472308A|2015-12-14|2016-04-06|湖北工业大学|Multi-view naked eye 3D video conference system|
WO2017108702A1|2015-12-24|2017-06-29|Unilever Plc|Augmented mirror|
CN108475107B|2015-12-24|2021-06-04|荷兰联合利华有限公司|Enhanced mirror|
EP3394709B1|2015-12-24|2021-02-17|Unilever Plc.|Augmented mirror|
EP3403217A4|2016-01-12|2019-08-21|Princeton Identity, Inc.|Systems and methods of biometric analysis|
US10304002B2|2016-02-08|2019-05-28|Youspace, Inc.|Depth-based feature systems for classification applications|
CN108139663A|2016-03-03|2018-06-08|萨利赫·伯克·伊尔汉|Smile mirror|
WO2017172695A1|2016-03-31|2017-10-05|Princeton Identity, Inc.|Systems and methods of biometric anaysis with adaptive trigger|
US10366296B2|2016-03-31|2019-07-30|Princeton Identity, Inc.|Biometric enrollment systems and methods|
US10339595B2|2016-05-09|2019-07-02|Grabango Co.|System and method for computer vision driven applications within an environment|
FR3053509B1|2016-06-30|2019-08-16|Fittingbox|METHOD FOR OCCULATING AN OBJECT IN AN IMAGE OR A VIDEO AND ASSOCIATED AUGMENTED REALITY METHOD|
KR102178566B1|2016-06-30|2020-11-13|주식회사 엘지생활건강|Electronic mirror apparatus and method for controlling the same|
JP6931757B2|2016-09-23|2021-09-08|株式会社インテロール|Signage devices, display methods, and programs|
KR20180035434A|2016-09-29|2018-04-06|삼성전자주식회사|Display apparatus and controlling method thereof|
JP6853475B2|2016-10-14|2021-03-31|フリュー株式会社|Photo creation game console and display method|
TWI610571B|2016-10-26|2018-01-01|緯創資通股份有限公司|Display method, system and computer-readable recording medium thereof|
US10437342B2|2016-12-05|2019-10-08|Youspace, Inc.|Calibration systems and methods for depth-based interfaces with disparate fields of view|
CN108268227B|2017-01-04|2020-12-01|京东方科技集团股份有限公司|Display device|
US10303259B2|2017-04-03|2019-05-28|Youspace, Inc.|Systems and methods for gesture-based interaction|
US10303417B2|2017-04-03|2019-05-28|Youspace, Inc.|Interactive systems for depth-based input|
WO2018187337A1|2017-04-04|2018-10-11|Princeton Identity, Inc.|Z-dimension user feedback biometric system|
US10325184B2|2017-04-12|2019-06-18|Youspace, Inc.|Depth-value classification using forests|
CN108881981A|2017-05-08|2018-11-23|Tcl新技术(惠州)有限公司|One kind is across screen display methods, storage equipment and electronic equipment|
WO2018227349A1|2017-06-12|2018-12-20|美的集团股份有限公司|Control method, controller, intelligent mirror and computer readable storage medium|
KR20200028448A|2017-07-26|2020-03-16|프린스톤 아이덴티티, 인크.|Biometric security system and method|
FR3071723A1|2017-10-04|2019-04-05|Dessintey|DEVICE FOR IMPLEMENTING MIRROR THERAPY AND CORRESPONDING METHOD|
KR101944297B1|2017-12-29|2019-01-31|김정현|Advertising system using smart window|
KR102116526B1|2018-02-12|2020-06-05|김치현|Smart mirror|
JP6684303B2|2018-04-20|2020-04-22|緯創資通股▲ふん▼有限公司Wistron Corporation|Interactive clothing and accessory fitting method and display system thereof|
CN109827646A|2018-12-21|2019-05-31|太原重工股份有限公司|Weighing method and weighing device for powder material|
KR102275172B1|2019-08-21|2021-07-08|주식회사 디지털포토|Method for providing barrel distortion calibration and background change based photo printing service abling to take self id photo|
CN111654621B|2020-05-26|2021-04-16|浙江大学|Dual-focus camera continuous digital zooming method based on convolutional neural network model|
CN111631569A|2020-05-28|2020-09-08|青岛谷力互联科技有限公司|Multi-functional AI intelligence vanity mirror based on thing networking|
CN112001303A|2020-08-21|2020-11-27|四川长虹电器股份有限公司|Television image-keeping device and method|
CN112422821B|2020-11-01|2022-01-04|艾普科创控股有限公司|Intelligent terminal picture shooting and publishing method and system based on Internet|
Legal status:
2020-09-29| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-11-23| B350| Update of information on the portal [chapter 15.35 patent gazette]|
2021-12-21| B07A| Application suspended after technical examination (opinion) [chapter 7.1 patent gazette]|
Priority:
Application No. | Filing Date | Patent Title
US201261738957P| true| 2012-12-18|2012-12-18|
US61/738,957|2012-12-18|
US13/843,001|2013-03-15|
US13/843,001|US8982109B2|2005-03-01|2013-03-15|Devices, systems and methods of capturing and displaying appearances|
PCT/US2013/076253|WO2014100250A2|2012-12-18|2013-12-18|Devices, systems and methods of capturing and displaying appearances|